An interesting feature of the latest version of Photoshop is the ability to use the alignment and merge capabilities that Photoshop uses for panoramas and HDR to extend the depth of field of your images. We look at how it stacks up.
We are still coming to terms with the features in Photoshop CS4, and indeed in the rest of the suite. One that intrigued us was the ability to extend the depth of field of your images, so we decided to make it the subject of our first deep look.
Extended DOF makes use of the image alignment and merge capabilities already well used for panoramas and HDR imaging. Here, you load a series of images taken with different points of focus into layers, then run an align followed by a blend.
The key to success with this approach is to make sure that each image has a substantial overlap of sharpness with both the image before it and the one after.
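As a rough guide to how much that overlap requirement costs you in shot count, here is a back-of-envelope sketch (my own figures, not from the article) using the standard close-up depth-of-field approximation DOF ≈ 2·N·c·(m+1)/m². The f-number, circle of confusion, magnification, and overlap fraction below are hypothetical example values.

```python
import math

def frames_needed(span_mm, n, c_mm, m, overlap=0.5):
    """Frames to cover span_mm of depth, advancing focus by (1-overlap)*DOF
    each shot so that adjacent frames share a sharp zone."""
    dof = 2 * n * c_mm * (m + 1) / m**2   # per-frame depth of field, mm
    step = dof * (1 - overlap)            # focus advance per shot, mm
    return math.ceil(span_mm / step)

# A 10 mm deep macro subject at 1:1, full-frame CoC of 0.03 mm:
print(frames_needed(10, 8, 0.03, 1.0))   # f/8  -> 21 frames
print(frames_needed(10, 4, 0.03, 1.0))   # f/4  -> 42 frames
```

Halving the f-number doubles the number of frames needed for the same overlap, which is consistent with the struggles at f4 described later in this article.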
Let us start off with an ideal subject and then get more complex.
This is what can happen when you have too few images, with little or no overlap between the sharp zones.
Step by step, this is what you do:
1. Select the images in Bridge and choose Tools -> Photoshop -> Load Files into Photoshop Layers
2. In Photoshop, select all layers
3. Choose Edit -> Auto-Align Layers
4. Choose Edit -> Auto-Blend Layers, being sure to select the Stack Images and Seamless Tones and Colors options
5. All done
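To give a feel for what the blend step is doing, here is a minimal per-pixel focus-stack sketch in NumPy. This is not Photoshop's algorithm; it assumes the frames are already aligned (Photoshop's Auto-Align step) and simply keeps, for each pixel, the layer with the strongest local contrast as measured by a Laplacian filter.

```python
import numpy as np

def laplacian(img):
    """4-neighbour Laplacian magnitude as a crude sharpness measure."""
    p = np.pad(img, 1, mode="edge")
    return np.abs(p[:-2, 1:-1] + p[2:, 1:-1] +
                  p[1:-1, :-2] + p[1:-1, 2:] - 4 * p[1:-1, 1:-1])

def focus_stack(layers):
    """Pick, per pixel, the layer whose Laplacian response is largest."""
    stack = np.stack(layers)                        # (n, h, w)
    sharpness = np.stack([laplacian(l) for l in layers])
    best = np.argmax(sharpness, axis=0)             # per-pixel layer index
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Two synthetic frames: each is sharp on one half, blurred on the other.
sharp = np.tile([0.0, 1.0], (8, 8))                 # fine stripe detail
blurred = np.full_like(sharp, 0.5)                  # detail averaged away
frame_a = np.hstack([sharp[:, :8], blurred[:, 8:]])
frame_b = np.hstack([blurred[:, :8], sharp[:, 8:]])
result = focus_stack([frame_a, frame_b])
print(np.abs(result - sharp).max())                 # -> 0.0: sharp halves win
```

A hard per-pixel choice like this is also why the blend produces the jigsaw of layer masks discussed later, rather than a single clean mask per layer.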
If you start with a series like this
you end up with this
Now the above is an almost ideal test, with a smooth, continuous distance change up the image. It is now time to look at some more challenging situations.
Here is an obvious situation to try this technique. Even at f32, with our 100mm lens and the camera position we are using, we can’t get it all sharp, from the back of the chair in front to the distant window.
So I took three shots: the one above, one nicely focused on the vase and beyond, and one on the more distant door and window. Put together in Photoshop, we get the completely effective result below.
You can see the three layers with the resulting mask.
There are some possible side effects of this process. If you have dust on the sensor, then, because the perspective changes slightly as you focus, the alignment step can smear a single dust spot into a sequence of marks, as in the detail below.
Another artefact can appear if you resize and then flatten, as opposed to flattening the layers and then resizing. It can be seen in the image below and seems to come from the way the mask is generated. Rather than simply masking out the area on a lower layer that a top layer covers, which is what you might do if working manually (yes, it can be done manually), Photoshop breaks the whole image up into a jigsaw puzzle of pieces across the layers. At least in the beta I am writing this from, there seems to be a slight mismatch when resizing between the masks and the image parts themselves, which produces this slight halo around the edges. We will see if it is fixed in the shipping version, but you can avoid it anyway by flattening before resizing.
So, provided you have a small number of images and a lot of sharp overlap from image to image, it works. But what about on something really tough?
When I first saw this feature I immediately thought about applications in macro photography.
Below we have an image of amber I shot at f16.
Here are a couple of 100% sections:
So I wanted to see if I could get sharpness through more of the depth of the amber. I shot 11 images, slowly moving the focus deeper into the amber. The result is below.
Below are two 100% sections:
This clearly did not work, but then I had made it really hard on the software by using f4 for the individual shots.
For this new series of images I used f5.6, still a tough test.
The result from combining the 20 images is excellent; this time I allowed even more sharp overlap from image to image.
This shell fossil (below) worked extremely well from 10 images.
One interesting thing with it, though, is that we can see in the 100% section below that specular highlights are not handled so well.
Lastly I decided to revisit the amber. The shot below was taken at f16. Below that are several 100% detail areas.
By combining seven such shots taken at f16 we get the image below and then the 100% sections below that. By the way, the color of the amber was removed by the auto white balance of the camera.
This worked much better than the first attempt, but is still not perfect.
So what I think about this new feature of Photoshop CS4 is that it does, in fact, work. You do need to be careful using it, and the larger the inherent depth of field of the source images, the better. It does not work on all subjects. I have also come to the conclusion that the fewer images to be merged, the better, which again favours getting the greatest depth of field you can in each frame. What this means is that you can shoot at the lens's sharpest aperture, rather than the one that gives the greatest depth of field, and then combine several images. You end up with more depth of field than the smallest aperture would give, and a sharper result, because you avoid diffraction effects.
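The diffraction claim is easy to sanity-check with back-of-envelope numbers (mine, not the article's). The diameter of the diffraction blur spot, the Airy disk, is roughly 2.44 x wavelength x f-number; the figures below assume 550 nm green light and a common full-frame circle of confusion of 0.03 mm.

```python
WAVELENGTH_MM = 0.00055   # green light, 550 nm
COC_MM = 0.03             # common full-frame circle of confusion

def airy_disk_mm(f_number):
    """Approximate diffraction blur diameter on the sensor, in mm."""
    return 2.44 * WAVELENGTH_MM * f_number

for n in (8, 16, 32):
    print(f"f/{n}: {airy_disk_mm(n):.4f} mm")
# f/8 (~0.011 mm) stays well under the 0.03 mm circle of confusion;
# f/32 (~0.043 mm) exceeds it, so stacking several f/8 frames can
# out-resolve a single f/32 frame.
```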
It does take practice and it is not a rescue for bad photography in the first place, but it is a useful extension and another good example of computational photography offering real advances.
Once I have the shipping version of the software I will check to see if its performance has improved.