
1. Introduction: Variable gain image sequence processing


Many papers have been published on the problems of motion estimation and frame alignment; for a review, see [1]. Most of these assume fixed gain. In practice, however, camera gain varies to compensate for the varying quantity of light, by way of Automatic Gain Control (AGC), automatic level control, or some similar form of automatic exposure.

In fact, almost all modern cameras incorporate some form of automatic exposure control. Moreover, next-generation cameras, such as EyeTap devices (http://eyetap.org) that cause the eye itself to function, in effect, as both a camera and a display, also feature an automatic exposure control system so that a hands-free, gaze-activated wearable system can operate without conscious thought or effort. Indeed, the human eye itself incorporates many features akin to the automatic exposure or AGC of modern cameras.

Figure 1 illustrates how such a camera takes in a typical scene.

Figure 1: Automatic exposure as the cause of differently exposed pictures of the same (overlapping) subject matter: (a) Looking from inside Hart House Soldiers' Tower, out through an open doorway, when the sky is dominant in the picture, the exposure is automatically reduced, and we can see the texture (clouds, etc.) in the sky. We can also see University College and the CN Tower to the left. (b) As we look up and to the right, to take in subject matter that is not so well illuminated, the exposure automatically increases somewhat. We can no longer see detail in the sky, but new architectural details inside the doorway start to become visible. (c) As we look further up and to the right, the dimly lit interior dominates the scene, and the exposure is automatically increased dramatically. We can no longer see any detail in the sky, and even the University College building, outside, is washed out (overexposed). However, the inscriptions on the wall (names of soldiers killed in the war) now become visible. (a,b,c) The differently exposed pictures of overlapping subject matter can be combined to extend dynamic range and tonal definition, or to provide a true photographic quantity ``lightspace'' for intelligent vision systems.
[Figure 1: three panels (a), (b), (c), differently exposed frames from the Hart House Soldiers' Tower image sequence.]
As we look straight ahead we see mostly sky, and the exposure is quite small. Looking to the right, at darker subject matter, the exposure automatically increases. Since the differently exposed pictures depict overlapping subject matter, we have (once the images are registered, in regions of overlap) differently exposed pictures of identical subject matter. (Registration typically also includes correction of barrel distortion, correction for darkening toward the corners of the image, such as the $\cos^4(\alpha)$ falloff, etc., to make the camera a truly quantimetric instrument.) In this example, we have three very differently exposed pictures depicting parts of the University College building and its surroundings.
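As a rough illustration of the corner-darkening correction mentioned above, the following Python sketch divides out a $\cos^4(\alpha)$ falloff. The pinhole model, the focal length expressed in pixels, and the placement of the optical centre at the image centre are assumptions made here for illustration; they are not specified in the text.

    import numpy as np

    def cos4_correction(image, focal_length_px):
        # Compensate for cos^4(alpha) darkening toward the image corners.
        # Assumptions (not from the text): pinhole camera model, optical
        # centre at the image centre, focal length given in pixel units.
        img = np.asarray(image, dtype=np.float64)
        h, w = img.shape[:2]
        y, x = np.mgrid[0:h, 0:w]
        # Radial distance of each pixel from the assumed optical centre.
        r = np.hypot(x - (w - 1) / 2.0, y - (h - 1) / 2.0)
        # Off-axis angle alpha of the ray through each pixel.
        alpha = np.arctan2(r, focal_length_px)
        gain = 1.0 / np.cos(alpha) ** 4
        if img.ndim == 3:            # colour image: apply per channel
            gain = gain[:, :, np.newaxis]
        return img * gain

    # Example (hypothetical parameters): flatten a frame before registration.
    # frame_flat = cos4_correction(frame, focal_length_px=800.0)

A correction of this kind, together with barrel-distortion removal, is what allows the registered, differently exposed frames to be treated as quantimetric measurements of the same subject matter.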


