As a follow-up to my article on the HP professional photography blog, I explore some of the reasons why people might want to work at both ends of this spectrum of practice.
The in-camera vs post-camera photography article generated a lot of interest, and naturally there is much more to say on the subject.
I fall between the two extremes, doing many things in-camera while not being afraid to do a lot on the computer. So let's explore some of the issues.
Most of the things you can do to an image in Photoshop throw away image information. Apply a contrast-enhancing curve, adjust levels, tweak color saturation, or dodge and burn, and you can lose information from the original data captured by the sensor (or film-and-scanner combination). That is why many of us shoot only in RAW mode, capturing a greater number of bits per pixel so that after all the manipulation there are still at least 8 bits per channel of data left and the prints display no banding or similar artifacts. Given that most cameras in RAW mode do not capture a full 16 bits per channel but only 11 or 12 bits, there is a limit to how far you can push the processing before data loss becomes visible. I see this particularly in some of my night infrared work, where the data from the camera occupies only a small section of the histogram, and when it is spread more fully you can see the lack of tonal gradations.
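The arithmetic behind this is easy to sketch. Here is a hypothetical illustration (synthetic data, not from any real camera) of why stretching a narrow slice of the histogram leaves visible gaps with an 8-bit capture but not with a 12-bit one:

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic underexposed scene: tones occupy only the bottom
# quarter of the brightness range, as in the night infrared example.
scene = rng.random(100_000) * 0.25

# Case 1: captured at 8 bits, then stretched 4x on the computer.
cap8 = (scene * 256).astype(int)    # levels 0..63 only
stretch8 = cap8 * 4                 # spread across 0..252 in steps of 4
print(len(np.unique(stretch8)))     # -> 64 distinct levels: banding

# Case 2: captured at 12 bits, stretched 4x, then scaled down to
# 8 bits for output (as when converting RAW for printing).
cap12 = (scene * 4096).astype(int)  # levels 0..1023
stretch12 = (cap12 * 4) // 16       # back on a 0..255 scale
print(len(np.unique(stretch12)))    # -> 256 distinct levels: smooth
```

The stretched 8-bit capture can only ever hit 64 of the 256 output levels, which is exactly the gap-toothed histogram and posterized gradations described above; the 12-bit capture fills every level.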
While we are considering the data the camera captures, it is worth remembering that the actual spatial resolution of the camera can also place a real limit on you. I think all of us would like a higher-resolution digital camera. When you crop, you are throwing away pixels, and resampling in Photoshop, or even in programs like Genuine Fractals, can only go so far in giving you enough pixels to print at the size you want. One way around this is tiling: taking multiple overlapping images, as you would to create a panorama, and stitching them together in Photoshop or one of the dedicated stitching programs. Here is a clear use for post-processing.
Given the above, it makes a lot of sense to do what you can at the capture stage to maximize the amount of actual data the sensor captures. Keeping the exposure as high as possible without clipping, to minimize sensor noise, makes a lot of sense. So does the use of certain filters, such as a polarizing filter to trim burnt-out highlights (allowing you to push the rest of the image higher up the exposure range without clipping) and boost color saturation at the taking stage. Another useful tool is the graduated neutral density filter to pull in a very bright sky. All of this is about maximizing the amount of data the sensor can capture, rather than special effects. Give the sensor the most information (in the areas you want) and you have much more to work with later, if you want it.
If you follow the above, when you get the images into Photoshop you will have the maximum possible information to work with, and that gives you the maximum amount of choice. I suspect we have all been caught with a great image that you just can't push as far as you might like because, for lack of information, it starts to fall apart when you push it. I know I have, and it is so very frustrating. In fact you want to cry, because you are uncertain whether you can ever get the shot again. This is where practice, knowing your gear, and taking care at the shooting stage can minimize your grief later.
The histogram that most digital cameras can display after you have taken a shot (and some before) is a wonderful tool that I believe most photographers underutilize. In fact it is such an important tool that it is, all by itself, a good enough reason to go digital. I recommend that you spend some time getting very familiar with exactly how it works on your camera(s). I would go so far as to suggest the following exercise:
- Set up your camera on a tripod with a scene typical of the sort of work you commonly do, in terms of brightness range, etc.
- Take a shot and get the histogram display up
- Later download your images to the computer but do not delete them off the camera
- Now bring up your test image in your image editing program of choice
- Also bring the same image up on the camera LCD
- Display histograms in both and compare
- Carefully examine your image on the computer and assess any problems
The reason I suggest this exercise is that all cameras do some internal processing in producing their LCD displays, including the histogram. Since most of your evaluation of your images will probably take place on your computer screen, you need to assess whether the image and histogram on the camera display are identical to those on your computer and, if they differ, in what ways. This trains your eye so that you can better use the camera display to assess images in the field. Look for things like the on-camera display not showing highlight or shadow clipping in the same way as the computer, issues with how per-channel histograms are displayed (if your camera offers them), or, if the camera displays only an averaged luminosity histogram rather than the individual channels, how that relates to the actual channels.
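To make the luminosity-histogram caveat concrete, here is a small hypothetical sketch (synthetic pixel data; the common Rec. 601 luminance weights are assumed) showing how a heavily clipped red channel can hide completely in an averaged luminosity reading:

```python
import numpy as np

rng = np.random.default_rng(1)

# A synthetic "sunset" patch: the red channel is pushed hard against
# the top of the range, while green and blue sit comfortably below it.
h, w = 64, 64
r = np.clip(rng.normal(250, 15, (h, w)), 0, 255)
g = np.clip(rng.normal(140, 20, (h, w)), 0, 255)
b = np.clip(rng.normal(80, 20, (h, w)), 0, 255)

# One common luminance approximation (Rec. 601 weights).
lum = 0.299 * r + 0.587 * g + 0.114 * b

print(f"red pixels clipped:       {np.mean(r >= 255):.0%}")
print(f"luminance pixels clipped: {np.mean(lum >= 255):.0%}")
# Roughly a third of the red pixels are clipped, yet the averaged
# luminance never reaches 255, so its histogram shows no clipping.
```

This is exactly the kind of mismatch the exercise above helps you learn to spot on your own camera.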
The more you know about all the above, the more you can rely on the camera histogram in the field. In many situations, of course, bracketing is a great form of insurance, and it can also give you the option of HDR (high dynamic range) work later, if you wish and the image warrants the effort.
Sadly, I can only push this image, which I love, so far because of inadequate exposure when I took it.
Now, up to this point we’ve been discussing the ‘basic’ manipulations of an image: adjusting exposure, contrast, and localized adjustments. When it comes to the massive manipulations of which Photoshop (and similar programs) are very capable, the same quality-of-input criterion applies, perhaps even more so. When you are doing massive adjustments to an image, perhaps blending layers of the same or different images, applying wild filter effects, and so on, the quality of the resulting image, in terms of continuous tonal and color gradations, still depends on the quality of the component images. The old computer adage is ‘garbage in, garbage out’, and it certainly applies to images.
As to the ‘we’ll fix it in post’ approach, to borrow a term from the motion picture industry, the same criterion applies. Yes, you can fix an amazing number of things in Photoshop later, but only if you have enough data to work with. You also have to allow for the time involved. If you produce one-off fine art images, perhaps it does not matter if you spend several days on the computer fixing, finessing, and adjusting an image to perfection. If you are shooting hundreds of event images, there is no way you want to do much at all.
There are also real boundaries on what you can fix in postproduction. These boundaries differ from person to person, based on their Photoshop skills and on their understanding of what they are trying to duplicate. For example, many people could simulate the radial blurring that a Lensbaby (see my review here and its use in infrared here) produces. This requires a duplicated layer, a radial blur, and then a soft layer mask. However, the Lensbaby does other things in-camera. There is a redistribution of highlight and shadow levels, there are the depth-of-field effects and other optical effects caused by the lenses used (these differ between the original Lensbaby and the better optics in Lensbaby 2.0 and 3G), plus others I have not covered. All of this leaves me preferring to use a Lensbaby in some situations. Another point in favor of in-camera work is the ability to interact directly with the effect and the subject. This can allow you to adjust your shooting position for a certain superposition of elements, lighting, and so on that you can only really judge by seeing the effect.
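For what it's worth, the layer recipe just described can be sketched in a few lines of code. This is a loose, hypothetical approximation (numpy, arbitrary parameters, and a simple neighbour-average blur standing in for Photoshop's radial blur), not a faithful Lensbaby simulation; as noted above, the real optical effects go well beyond it:

```python
import numpy as np

def soft_blur(img, passes=20):
    # Crude stand-in for Photoshop's blur: repeated neighbour averaging.
    for _ in range(passes):
        img = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                   + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    return img

rng = np.random.default_rng(2)
sharp = rng.random((128, 128))      # stand-in for the base image layer

# Soft radial mask: 0 (sharp) at the centre, ramping to 1 (blurred)
# toward the edges — the "soft layer mask" of the recipe.
y, x = np.mgrid[0:128, 0:128]
dist = np.hypot(y - 64, x - 64)
mask = np.clip((dist - 20) / 40.0, 0.0, 1.0)

# Blend the blurred duplicate back through the mask: a sharp sweet
# spot with increasingly blurred surroundings.
result = sharp * (1 - mask) + soft_blur(sharp) * mask
```

The sweet spot in the centre stays identical to the original while detail toward the corners is smoothed away, which is the broad look, though none of the Lensbaby's optical character.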
All that said, there are many times when I do things in post. I may have been out with only limited gear and come across something that really needs equipment I left at home. Then I will do what I can to capture an image or images and fix it later in Photoshop. Of course there are also my large image composites, where I combine tens (occasionally hundreds) of images to make a scene that does not exist. And sometimes I’ll want to really hammer the hell out of an image and see where it takes me.
There is a place for both in-camera and post-camera work. The wise photographer uses both in a way that maximizes the quality of the result, whatever that may mean for their own photography.