Photography has been benefiting from the computational capabilities of computers for many years. But so far we have only scratched the surface of what is possible.
Computational photography is the application of computer algorithms and processing to the art and science of photography. It is not new, but even today we have only just begun to see its potential. It is time to put your seat belt on.
We have been benefiting from the influence of computers on our photography for a very long time. Optical design was revolutionized by computer ray tracing, which made possible the increasingly complex yet high-quality lens designs of the last 40 years. More recently, the development of sophisticated matrix-type light metering was a product of the ability to embed a simple but, for its day, quite powerful computer within a camera.
If we jump forward to today, we have significant in-camera processing capability to handle color profiles, sharpening and more. Software is available that can process images post-shoot to reduce image noise (Noise Ninja and the like), expand the dynamic range (HDR imaging) and even extend the depth of field of an image (Photoshop CS4). But this is only the start, and many of these solutions do not work as well as we would like.
The problem, at present, is that there is little or no in-camera support for these processes. Sure, cameras may offer exposure bracketing that helps with HDR, but most still offer only a three-frame bracket rather than the five to nine frames we might want. Focus bracketing does not exist at all, and so far in-camera noise reduction often makes the situation worse. Sadly, most camera manufacturers assume their customers are either people shooting their kids or pros shooting sports or portraits. For many of us, the biggest use we get out of burst mode is coupling it with exposure bracketing to shoot a sequence of exposures in very rapid succession.
Now imagine what is possible. We could have a camera that shoots a nine-exposure bracket at VERY high speed, so that HDR of moving subjects becomes possible. The camera could do the HDR combination in camera and at high speed, so only one (admittedly large) file gets written to the memory card, or the merge could be left for processing off camera. A version of this could also be used to address image noise, since HDR does a good job of eliminating shadow noise.

Focus bracketing could be supported in camera, alongside the exposure, ISO and white-balance bracketing we have now. Again, in-camera processing of the focus-bracket images could be done, but it is not essential. Even without full in-camera processing, if the camera placed the multiple images from a bracket into an image-stack structure on the memory card, it would make life a bit easier later. A new approach to lens and sensor design could allow infinite depth of field and post-shoot selectable focus points and depth of field without having to take more than one shot; alternatively, we could use the existing approach of merging shots taken at different focus points.

We could (and at least one camera already does) capture images before and after we press the shutter, letting us choose the magic moment after the shot. We could even extract depth information along with the color and brightness data, allowing at least some ability to change viewpoint post-shoot. Multispectral imaging could move out of spy satellites and aerial photography and into your camera, allowing post-shoot choice of infrared, UV or mixed-spectrum images. And integration of the camera with accessory robotic panorama heads would make for painless panoramas.
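The in-camera HDR combination described above can be sketched in a few lines. This is a deliberately simplified illustration, not a production algorithm: it assumes a linear sensor response and perfectly aligned frames, whereas a real pipeline would also recover the camera response curve and compensate for subject motion. The function name and interface are my own invention.

```python
import numpy as np

def merge_hdr(frames, exposure_times):
    """Merge bracketed exposures into one HDR radiance map.

    frames: list of float arrays with values in [0, 1]; a linear
    sensor response is assumed (a simplification).
    exposure_times: shutter time in seconds for each frame.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for frame, t in zip(frames, exposure_times):
        # Hat weighting: trust mid-tone pixels most, clipped pixels least.
        w = 1.0 - np.abs(2.0 * frame - 1.0)
        acc += w * frame / t  # dividing by exposure time scales to radiance
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-8)
```

Because well-exposed pixels dominate the weighted average, shadow detail comes from the long exposures and highlight detail from the short ones, which is also why this approach suppresses shadow noise.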
All of the above is possible now and has been demonstrated in one form or another. So now we can picture an amazing camera. It shoots 30 frames per second at full resolution for short bursts, allowing HDR, infinite-depth-of-field shooting of moving subjects. Some degree of processing is done in camera, even if it is just creating image stacks. Focus bracketing is built in: you select the two focal limits before the shot is taken and the camera works out the required steps in between. Because of the high-speed capture you can choose over what time frame images are saved for later choice, keeping, for example, five frames before and after the shutter press. More than just RGB is captured: depth information is stored with each pixel for later processing, or is extracted later from the focus-bracket data. This, coupled with automatic GPS recording, aids the later production of 3D virtual versions of the location, provided we shoot from at least three viewpoints at some point while on location. A port allows connection to a motorized tripod head for panorama shooting, and the camera stores the images in one image stack ready for processing. This camera could be built today. You are not going to need it for shooting your kids at their ballet class, but there are many professional and serious amateur photographers who would kill for a camera like this. I know I would.
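Working out the steps between two focal limits is straightforward if, as a common heuristic, the camera spaces the focus distances uniformly in diopters (reciprocal metres), since depth of field is roughly constant when measured in diopter space. A minimal sketch of that calculation follows; the function name and fixed-frame-count interface are illustrative assumptions, not any manufacturer's API:

```python
def focus_bracket_steps(near_m, far_m, n_frames):
    """Focus distances (in metres) for a bracket between two limits.

    Spacing is uniform in diopters (1/distance), a common heuristic
    because depth of field is roughly constant in diopter space.
    Requires n_frames >= 2; the two limits are always included.
    """
    d_near, d_far = 1.0 / near_m, 1.0 / far_m
    step = (d_near - d_far) / (n_frames - 1)
    return [1.0 / (d_near - i * step) for i in range(n_frames)]
```

Note how the diopter spacing naturally concentrates shots near the close limit, where depth of field is thinnest, and spreads them out toward the far limit.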
To implement the above fully you would want a sensor capable of being read at somewhere between thirty and sixty frames per second. The sensor could sit in a two-stage mount that, as well as providing the in-body image stabilization and anti-dust systems, could also micro-move the sensor relative to the Bayer filter for higher resolution and genuine full-color sampling at each pixel site. The motors used for AF would gain an additional use in focus bracketing and depth extraction. The camera's processing power would be greatly enhanced by the use of multiple image-processing chips to distribute the load and handle the vast amount of information being produced. A second sensor in the optical path could assist the depth extraction, which mainly makes use of information gained from the focus-bracketing process. The processors would need very high-speed buffer memory to hold all the information produced while it is being written to storage cards. Early versions would leave most of the fancy processing to post-shoot work on your desktop or laptop using a sort of super-RAW file; over time you would have the option of doing more of the processing in camera.
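The idea of micro-moving the sensor relative to the Bayer filter can be illustrated with a toy merge of four frames, each shifted by one photosite, so that every pixel ends up with a directly measured value in each color channel instead of a demosaiced guess. This sketch assumes an RGGB filter pattern, a static scene and perfect registration; real implementations must also align and normalize the frames.

```python
import numpy as np

# RGGB filter cell -> RGB channel index (assumed pattern)
BAYER = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}

def combine_pixel_shift(frames):
    """Combine four one-photosite-shifted Bayer frames into full RGB.

    frames: dict mapping sensor shift (dy, dx) -> 2D raw array, for the
    shifts (0,0), (0,1), (1,0), (1,1). Shifting the sensor by (dy, dx)
    puts pixel (y, x) under filter cell ((y+dy) % 2, (x+dx) % 2).
    """
    h, w = frames[(0, 0)].shape
    rgb = np.zeros((h, w, 3))
    count = np.zeros((h, w, 3))
    ys, xs = np.mgrid[0:h, 0:w]
    for (dy, dx), raw in frames.items():
        for cell, ch in BAYER.items():
            # Pixels that sat under this filter cell during this exposure.
            mask = ((ys + dy) % 2 == cell[0]) & ((xs + dx) % 2 == cell[1])
            rgb[..., ch][mask] += raw[mask]
            count[..., ch][mask] += 1
    # Each pixel is measured once as R, twice as G, once as B; average.
    return rgb / np.maximum(count, 1)
```

Over the four shifted exposures every photosite samples red once, green twice and blue once, which is why the result needs no interpolation between neighboring pixels.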
Such a camera would be fantastic for architectural and landscape photography, fine art, still life and many other genres. These types of photography are characterized by careful setup and low shooting rates, so it really doesn’t matter if, with all these features turned on, the camera can only shoot one image every two to five seconds. In the future this time can shrink massively, but that isn’t really essential so long as the actual capture time is short.
But all the above still represents only a start on what computational photography can do. We can see these possibilities from where we stand now, but the real potential is something we have not yet clued into. You can be assured of one thing: photography will always remain photography, but the potential for creative expression will be greater than ever.