After the Shoot, Part 1: Import

This ongoing series of articles covers what you do after you have taken the picture with your digital camera.
When it comes to digital workflows, taking the picture is only the start. So that we can make some headway with a huge topic, we will assume you are using a fairly recent digital camera and that your interest in photography is a pretty serious one. Over this series we will cover a range of workflow options, including ones using Photoshop, Lightroom and Aperture, as well as other special-purpose software and plug-ins to handle things like noise.

When I return from a shoot I have three initial priorities: download, backup and quick scan.

You would think downloading your images is an easy process, but there are a good number of options even here. You can download from the camera using a suitable USB cable, or you can remove the memory card and use a card reader of some description. The first is simple; the second is much better if you use a lot of cards during a shoot (as I typically do) and want the fastest download. I’ve found many cameras can be positively slow when downloading directly, though this is far from true of all cameras.

[Image: After the Shoot, Lexar]
Also, if you have UDMA-compatible memory cards and a UDMA card reader (see B&H Photo UDMA products) you will get much faster downloads than most cameras can deliver.

Also part of the downloading question is what software you will use. You can use the operating system and drag and drop the files from the memory card or camera to somewhere on your computer. When doing this, the camera’s USB mode needs to be set so that it just looks like a disk to the computer. You can use the software that came with your computer (such as iPhoto on a Mac), the software that came with your camera, or some other program, such as Lightroom or Aperture. There are subtle differences between these approaches. When you use the operating system you have total control over where the photos go, but you then need a second step to import them into whatever program you are managing them in, such as Lightroom. Using software to import and catalog them makes it one step but, depending on the program, may not give you all the control you want.

Making an immediate backup of your images should be the next step, because in the process of importing them you may have cleared them off your memory card(s) and so have only one copy of the images. You can immediately back up to CD or DVD, or to a second hard disk drive, either attached to your computer or on the local network. Eventually you should do both. I back up across three disk drives, my computer plus two external drives, and then back up to DVD later at a convenient time. Please note that disk drives do fail. Do not listen to manufacturers’ hype: drives fail, and unless a unit is a suitable RAID system that provides redundancy to survive a drive failure, even a drive labeled as a backup drive is a risk. So you always need your images on at least two disk drives and, as soon as you can, backed up on DVD and stored away from the computer (in case of fire or theft) as well. When you back up to DVD you should have a system in place so that you can find the right disk. This can be software based, using something like Extensis Portfolio, or as simple as a log book. Just make sure you have a system that you use.
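A download-and-backup step like this can even be scripted. Here is a minimal sketch in Python (the function names and the flat copy layout are our own, not from any photo tool) that copies a card’s files to two destinations and verifies each copy by checksum:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_card(card_dir, dest_dirs):
    """Copy every file under card_dir to each destination directory,
    verifying each copy against the source checksum.
    Note: this sketch flattens the card's folder structure by file name."""
    for src in Path(card_dir).rglob("*"):
        if not src.is_file():
            continue
        for dest_dir in dest_dirs:
            dest = Path(dest_dir) / src.name
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)
            if sha256_of(dest) != sha256_of(src):
                raise IOError(f"Checksum mismatch copying {src} to {dest}")
```

The checksum comparison is the point: only after both copies verify would you feel safe formatting the card.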

[Image: After the Shoot, Adobe Bridge]

Giving your images a quick scan to see how they look is usually an urgent need for a photographer. You can use the software you used to import the images, the operating system (sometimes) or another program, such as Adobe Bridge. Remember that some programs will build their previews and save them, while others will generate them every time you open the directory. This affects your revisit time. Of course, those that save the previews use up more disk space but unless you are on a laptop this should not be much of an issue.

[Image: After the Shoot, Apple Aperture]

[Image: After the Shoot, Adobe Lightroom]

Next article in the series: Basic image adjustment in Adobe Camera RAW.

Depth of Field – How Does It Really Work?

Depth of field is one of the least well-used aspects of photographic control. Yet it really is very simple to get your head around.

[Image: Depth of field]

A camera lens will actually bring only one single, flat (if it is a good lens) plane into perfect focus. As you move away from the plane of sharp focus, objects become gradually more blurred. In practice we can tolerate a small amount of blur (called a circle of confusion, from the blurred circle of light you get from an out-of-focus point source of light, like a star). How much blur we can tolerate is determined by how much we will blow up the image in printing or projection. Common values for this circle of confusion range from 0.025mm to 0.033mm. The reason larger-format images appear to have greater depth of field is that you do not need to magnify them as much to reach a given print size.

[Image: Depth of field]

Aperture F number (or f stop) is calculated by dividing lens focal length (fl) by the diameter of the aperture (a): F number = fl / a. What this means is that for a given F number, a telephoto lens (long focal length) will have a larger aperture diameter (if you like, the size of the front element of the lens reflects this) than will a wide angle lens. That’s why an f2.8 28mm lens is not as physically wide as an f2.8 400mm lens. For depth of field, it is actually the lens aperture diameter and not the focal length that matters, but you can see from the above why we can effectively think in terms of focal length, because of the relationship between F number, aperture and focal length.
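To make the relationship concrete, here is a quick calculation (a sketch; the helper name is ours, not from any photographic library):

```python
def aperture_diameter(focal_length_mm, f_number):
    """Physical aperture diameter, from F number = focal length / diameter."""
    return focal_length_mm / f_number

# At the same F number, the longer lens needs a far wider physical aperture:
wide = aperture_diameter(28, 2.8)   # 10 mm for a 28mm f/2.8
tele = aperture_diameter(400, 2.8)  # roughly 143 mm for a 400mm f/2.8
```

That fourteen-fold difference in diameter is exactly why the 400mm f/2.8 is such a physically huge lens.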

The basic lens equation is 1/subject distance + 1/focal plane distance = 1/focal length. The focal length (fl) is the distance from the lens that a subject at infinity will be brought to focus. Subject distance (s) is the distance from the lens to the subject we have focused on and focal plane distance (fpd) is the distance from the lens to the film or sensor plane in the camera. The above equation explains why a lens extends as you focus on closer subjects (fpd must get larger to compensate for s getting smaller, since fl remains constant). It also explains why adding extension tubes or a bellows to a lens allows it to focus on closer subjects (it increases fpd or focal plane distance).
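The same equation can be checked numerically (a sketch; distances are in millimetres and the function name is ours):

```python
def focal_plane_distance(focal_length, subject_distance):
    """Solve the basic lens equation 1/s + 1/fpd = 1/fl for fpd
    (all distances in mm)."""
    return 1.0 / (1.0 / focal_length - 1.0 / subject_distance)

# A 50mm lens focused at 1m must sit about 52.6mm from the film plane;
# focusing closer pushes it out further, which is why the lens extends.
```

For a subject at infinity the result collapses to the focal length itself, matching the definition above.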

A point of light is not, in practice, brought to a single point on the film plane, but to a tiny (hopefully) circle. The size of this tiny circle of blur (circle of confusion) is defined by the diffraction characteristics of the lens (and its aperture) and by the quality of the optical corrections in the lens. In many cases it is not, in fact, a perfect circle, due to the actual shape of the aperture and to any aberrations in the lens. As an object moves out of focus, this circle of confusion gets larger. One aside here – some lenses are marketed as having great out of focus blur, by having a carefully designed aperture iris that is as close to a perfect circle as the engineers can make it. It offers more pleasing out of focus images.

[Image: Depth of field]

Images appear to us to be sharp when this circle of confusion is smaller than we can resolve with our eyes. This explains why an image can look sharp from a distance but becomes blurred as we get closer: we are finally close enough for our eyes to resolve the circle of confusion. Thus there is no such thing as a completely sharp image. The closest we can get is a photograph of a completely flat object, like a map or painting. Even here, there will be a fundamental limit to sharpness set by the lens characteristics.

When you focus on a subject at distance s, an object closer to the camera (sn) will be brought to a focus further from the lens (behind the film plane). This means that at the film plane the circle of confusion will be larger. A subject further from the camera (sf) will come to a focus in front of the film plane. This also means that at the film plane the circle of confusion will be larger. The size of the circle of confusion turns out to be directly related to the physical aperture of the lens and how far the subject is away from what we have focused on. What this means is that to maintain a certain maximum circle of confusion size (effectively how sharp we want the image to look), as we increase the lens aperture (or the lens focal length) we get less distance off the focal point in acceptable focus.

[Image: Circle of confusion and depth of field]

So what all the above translates into is the following:
*    For a given lens, you get a greater depth of field as you stop down to smaller apertures (go from f2.8 to f11, say)
*    At a given aperture number, say f2.8, a telephoto lens will give you less depth of field than a wide angle lens, because the physical lens aperture will be larger for the longer focal length lens. This is provided you keep the lens to subject distance the same
*    The actual size of the depth of field decreases as the camera gets closer to the subject it is focused on (it can be 10 feet or 3m at a distance and only inches or centimeters up close)
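These three rules can be verified with the standard near/far limit formulas built on the hyperfocal distance (a sketch; the 0.03mm circle of confusion is an assumed value in the common range quoted earlier):

```python
def depth_of_field(focal_length, f_number, subject_distance, coc=0.03):
    """Near and far limits of acceptable focus, in mm, using the
    common hyperfocal-distance formulas:
        H    = f^2 / (N * c) + f
        near = s * (H - f) / (H + s - 2f)
        far  = s * (H - f) / (H - s)    (infinite once s >= H)
    """
    f, N, s = focal_length, f_number, subject_distance
    H = f * f / (N * coc) + f
    near = s * (H - f) / (H + s - 2 * f)
    far = float("inf") if s >= H else s * (H - f) / (H - s)
    return near, far

# A 50mm lens at f/2.8 focused at 3m holds roughly 2.73m to 3.33m in focus;
# stop down to f/11 and the zone grows to roughly 2.16m to 4.91m.
```

Running the same numbers with a 100mm lens at the same distance, or with the subject much closer, reproduces the second and third rules as well.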

[Image: Shot with a 50mm lens]

[Image: Shot from the same distance with a 100mm lens]

[Image: Shot with a 100mm lens from twice the distance]

All the above also explains why compact digitals seem to have a much greater depth of field than digital SLRs. For a given effective focal length (say 50mm in 35mm camera terms), a camera with a smaller sensor will use a shorter actual focal length than a camera with a larger sensor. Given the shorter focal length, at a given F number the smaller-sensor camera will use a smaller aperture diameter, giving a greater effective depth of field. This is why many complain of not being able to use with compacts the same shallow depth of field techniques that we are used to using with 35mm cameras for things like portraiture.
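That argument can be put in numbers (a sketch; the 2x crop factor is just an illustrative value for a compact’s small sensor):

```python
def equivalent_setup(ff_focal_mm, f_number, crop_factor):
    """Actual focal length and physical aperture diameter needed to match
    the framing of a full-frame (35mm) lens on a smaller, cropped sensor."""
    actual_focal = ff_focal_mm / crop_factor
    return actual_focal, actual_focal / f_number

# A 50mm-equivalent view at f/2.8:
ff_focal, ff_aperture = equivalent_setup(50, 2.8, 1.0)        # 50mm lens, ~17.9mm aperture
small_focal, small_aperture = equivalent_setup(50, 2.8, 2.0)  # 25mm lens, ~8.9mm aperture
# The compact's much smaller physical aperture is what deepens its depth of field.
```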

For those who want a more mathematical discussion, see Norman Koren’s excellent article.

Legal Issues and Photographers

Lately lots of questions about the law and how it applies to photographers have been appearing on the mailing lists I belong to. Here are some resources.
There are many legal issues that can crop up for photographers and people using photographs in artwork or for other purposes. Two great resources for these questions are some web sites run by lawyers who are also photographers.

The first is Bert P. Krages’ web site.
He is a lawyer who is also a photographer and has prepared a PDF
document of your rights to photograph that you can carry with you
in case of problems. It is for the US, but a link to one for the UK is on
his site.

The other is a blog by a lawyer, Carolyn Wright, which contains great
information for those with questions about the various laws and how
they apply to photography. It is another US site.

I’ll add more resources to this list as I become aware of them.

Preparing a web page using Photoshop and ImageReady

In this tutorial we examine how to create a web page by using the slice tools in Photoshop and ImageReady.
In this tutorial we are going to use Photoshop and ImageReady to create a web page using rollover buttons and the slice tool. Note that this is only one of many ways to achieve the same result, but it does illustrate the use of the facilities in Photoshop and ImageReady. Note this is a long page.

1.    Prepare your background image to the right size.

2.    Position guides to make creation and positioning of content and slices easier

3.    Create content for the page design

4.    Place your initial text which is going to form your buttons

5.    Switch to ImageReady

6.    Use the Slice Tool to generate slices for all parts of the page that look like they could use different compression methods and for each of the buttons

7.    Rename the slices to be meaningful by double-clicking on their names in the Web Content palette

8.    Duplicate the button layers so that you have one for each rollover state you wish to use. In this case it is three: normal, over and click. Rename the copies to reflect the button status

9.    Modify the layers to reflect the different effects you want. Remember that if you use layer styles, as in here, you can copy and paste the styles between layers to ensure a consistent result

10.    Now turn off all the button layers except the normal state

11.    Now select one of the slices in the Web Content palette and click the Create Rollover State button at the bottom of that palette.

12.    Now turn on the layer that gives the display for that state

13.    Now repeat for all the other buttons and states. By default the next state after Over is Down. If you right or control click on the state name you will get a small menu that allows you to pick from the other states.

14.    At this point you can preview it in a browser to make sure it works. The most common mistake here is not having the right layers visible in the correct states. To correct, just click on the state you need to change and then turn layers on and off until you have it right

15.    Now for each slice define the URL it is to trigger a jump to, the Alt tag, etc.

16.    The last real thing is to optimize each slice by selecting the compression method and degree. You can work on multiple slices at the same time by shift clicking on the slices

17.    Lastly we do a File Save As and make sure we save HTML and Images. Save these to your site folder or
directory and then you are ready to work further on it in
Dreamweaver or some other web design application to add the content to each page.

Monochrome Part 1

Convert Color Images To Monochrome – Which Channel To Watch

Monochrome, black and white or grayscale images are sometimes far more
effective than color ones. Ansel Adams’ work is a good example, as is a
Hitchcock movie. However, since most film shot is color and all digital
cameras capture full color images, there is usually a requirement to
convert the images from color to monochrome using your favorite
image-editing program. As we shall see in this article some ways of
doing this are better than others. Part 2 will cover how to
re-introduce selected color into an image.

When professional photographers shoot monochrome film they rarely do so
without a colored filter attached. The reason for this is that film
responds differently than the eye does. Film (and the CCD in your
camera or scanner) is more sensitive to some parts of the light
spectrum than others. Both are also sensitive to light outside the
visible range, such as ultra-violet and infrared. All this means that
shooting a scene on monochrome film without a filter doesn’t work satisfactorily
most of the time. The common filters we use are red or orange filters
to darken a blue sky or a green filter to improve the look of green
foliage by lightening it. Unfortunately the simplest way to convert a
digital image to monochrome is (in Photoshop) to do a mode conversion
to grayscale. Here all the color data is averaged to produce a
grayscale image. This also happens with digital cameras that have a
monochrome mode. The result is similar to using monochrome film without
a filter. It works well sometimes but we can often do better. A color
digital image is really three monochrome images (usually called
channels), one shot through a red filter, one through green and another
through blue. Often we can more easily produce a result that better
communicates what we want by using one or two of these channels only
when we convert to monochrome.
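The point is easy to demonstrate with NumPy (a sketch; the three-value array stands in for a real opened image):

```python
import numpy as np

def grayscale_average(rgb):
    """Naive conversion, akin to a straight mode change: average the channels."""
    return rgb.mean(axis=-1)

def grayscale_from_channel(rgb, channel):
    """Use a single channel (0=red, 1=green, 2=blue) as the monochrome image,
    much like shooting B/W film through a colored filter."""
    return rgb[..., channel]

# A blue-sky pixel: averaging gives a light gray, but the red channel
# renders the same sky much darker, as a red filter would.
sky = np.array([[[60.0, 90.0, 200.0]]])
```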

What To Do

Here is a sequence to go through when you wish to convert any type of image from color to monochrome.

1.    Setting things up

Open your image in Photoshop (or any similar program like Corel
PhotoPaint), save a copy of the image with another name and then open
this copy as well. This gives you two versions of the image that you
can treat differently. You can then compare which is better.

2.    Get a baseline

Convert the copy image to grayscale by doing Image -> Mode ->
Grayscale. Resize this version so that you can keep it visible on the
screen while you work on the other copy. This provides a base reference.

3. Show Channels

If you don’t already have it open, bring up the channels palette using
Windows -> Show Channels. You will notice that a color image (RGB)
has four entries in the channels palette, the top one showing the
combined RGB image and three more for each of the red, green and blue
channels individually (assuming you are working in RGB).

4.    Seeing red

Click on the red channel in the channel palette. This turns off all the
other channels and you see a grayscale image that is made up of only
the data from the red channel. In the red channel red objects will
appear light, blue objects dark and green objects grayish. Compare what
you get with the copy of the image which you did a straight grayscale
mode change on.

5.    Green with envy

Click on the green channel in the channel palette. This turns off all
the channels except green. In the green channel green objects will
appear light, red and blue objects darkish. Compare.

6.    Feeling blue

Click on the blue channel in the channel palette. This turns off all the channels except blue.

7.    Alternatives

An alternate way to work is to execute a Split Channels command from
the channel palette’s menu, accessed by clicking on the right arrow at
the top right of the palette. This creates three new grayscale windows,
one for each channel. This can be useful with some images where all
three channels provide useful, but different, renditions of the scene.
I tend to prefer this to other approaches.

8.    Evaluate

Switch backward and forwards through the channels (or the split files),
comparing them to the mode changed version. What parts of the image are
important for your intended use? Which channel gives the best rendition
of it? What parts of the image detract from your intended use? Which
channel makes it least noticeable? You should always have a clear
purpose for an image in mind. It could be as simple as showing the
beauty of a place or it could have a complex existential, political,
social or environmental message. Whatever it is, keep this in mind as
you are evaluating your options.

Remember that splitting channels is very similar to using colored
filters over the lens when shooting B/W film. A red filter is like the
red channel, etc.

9.    Execute

If one of the channels gives a better rendition than the straight mode
conversion ensure that only the channel you want is visible and then do
an Image -> Mode -> Grayscale conversion. You will be prompted as
to whether you wish to discard the other channels. Do so. Alternatively,
if you did the Split Channels approach earlier, just close without saving
the versions you don’t want.


Landscapes with lots of green foliage are a very common subject for
photography. Taking the green channel from a color image can give a
great result. Using the green channel adds more separation and modeling
into the foliage. This is because the well lit part of the plants will
be mostly green, thus light in the green channel, and the shadow areas
and other objects will have less green and more of the other channels,
thus being darker in the green channel. Dropping the other channels
thus increases the contrast in the foliage.


Skin blemishes, moles or acne often spoil shots of people. Unlike
landscapes where we frequently want more contrast to more clearly
separate objects, with people we usually want less contrast in the skin
tones. Since most skin has a fair amount of red in it, including
blemishes, the red channel can be a good one to use because the
blemishes will also have a substantial content in the other channels.

Step 1

Examine the individual channels. In this case the red channel looks the most promising.

Step 2

Convert to grayscale using just the red channel, or do split channels and use the one you want.

Step 3

Apply any other manipulations required, in this case curves.


Skies can be a key aspect that makes or breaks an image. A good way to
increase the drama in a sky is to use the red channel. In the red
channel a blue sky is very dark. This makes white clouds stand out more
clearly. On the other hand atmospheric haze, one of the key ingredients
in aerial perspective (the graying or bluing out of things with
distance) is mainly in the blue channel. Thus you can increase the
sense of distance by using the blue channel.

CMYK and Other Things

So far we have worked with RGB. Yet there are reasons to work in CMYK.
CMYK offers four channels to choose from. Also the color ranges covered
by the CMYK channels can better suit some objects. For instance, many
Australian plants, such as the gum trees common throughout Australia
and also California, have a lot of blue coloration in their leaves as
well as green. For such plants the cyan channel in a CMYK version may
work better than a green one in RGB. If you can’t get the effect you
want, convert to CMYK and then try.

There is also no reason why you must use only one channel in producing
a grayscale image. For some images you only need to remove one channel
to get the result you want. You do this by clicking on the eye next to
it in the channels palette to turn off its visibility. Some images may
also benefit from differential treatment in different parts of the
image. So you might want to use the green channel for the landscape and
the red one for the sky. A bit of masking can let you achieve this
easily. Indeed occasionally you will get an image that really benefits
from being divided up into parts and having each part treated
separately. This is really akin to the way we used to work in a
monochrome darkroom.


Digital cameras tend to put more noise into the blue channel than into
the others. This is because the sensor is less sensitive to the blue
end of the light spectrum, so its signal needs to be amplified more. This
also amplifies the noise. What this means is that even if the straight
mode convert version works well for you, it may be worth turning off
the blue channel before converting to grayscale in the interests of a
smoother image.
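In NumPy terms, dropping the blue channel before conversion just means averaging red and green (a sketch):

```python
import numpy as np

def grayscale_without_blue(rgb):
    """Average only red and green, discarding the noisier blue channel."""
    return rgb[..., :2].mean(axis=-1)
```

Whatever noise the sensor put into the blue channel simply never reaches the grayscale result.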

Monochrome Part 5 Digital Hand Coloring

The hand coloring of images is an old process, but doing it digitally offers so many advantages.
Digital Hand Coloring

Digital hand coloring is a similar, though distinct, process. Here we
have two options. The first is a direct digital analogue of
traditional hand coloring, where we use the paintbrushes to apply color
to an image using a blending mode that allows the underlying tonal
values to show through. The second involves the full use of Photoshop’s
selection and masking abilities to isolate parts of the image for
coloring.
Let’s walk step by step through the first process.

Digital Painted Hand Color

Here is one sample process:

1.    Open your image to be hand colored in Photoshop.

2.    Convert it to grayscale if necessary, then convert to RGB.

3.    Adjust the contrast and brightness of the image to
suit your aim. Use Adjustment Layers so that you can adjust this as the
color is added if it is necessary.

4.    Create a new layer above your background image.

5.    Choose your color.

6.    Change the layer’s blending mode to Color.

7.    Pick a soft brush and start painting onto the layer.

8.    I like to use different layers for each main color
and/or area. This makes it very easy to adjust the color by use of
Adjustment Layers.

9.    Keep building the image up until you get what you want.

10.    The last step will usually be to adjust the
opacities of your color layers to give exactly the effect you are
after.
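Under the hood, the Color blending mode keeps the base layer’s tonal values and takes the paint’s hue and saturation. Here is a one-pixel approximation using Python’s HLS model (Photoshop’s own luminosity math differs slightly, so treat this as a sketch, not Adobe’s formula):

```python
import colorsys

def color_blend(base_rgb, paint_rgb):
    """Approximate the Color blending mode for one pixel (values 0..1):
    keep the base pixel's lightness, adopt the paint's hue and saturation."""
    _, base_l, _ = colorsys.rgb_to_hls(*base_rgb)
    paint_h, _, paint_s = colorsys.rgb_to_hls(*paint_rgb)
    return colorsys.hls_to_rgb(paint_h, base_l, paint_s)
```

Painting pure red over a mid-gray pixel yields a red of the same lightness, while near-black and near-white pixels barely change, which is exactly why the underlying tonal values show through.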

Using Selections and Masks

This method works the same way as above except that we make use of
selections and masks to control where the color is applied rather than
painting the color in carefully by hand.

Monochrome Part 4 Split Toning

Split toning is where we get a different color effect in different parts of the image.
Digital Split Toning

Digitally we work exactly the same way as in the previous tutorials but
we need at least two different color layers or adjustment layers. For
this example we will use a warm and a cool tone.

1.    Put your monochrome image into RGB mode.

2.    Pick your warm color.

3.    Create a layer, fill it with the warm color and change the blending mode to color.

4.    Turn off this color layer.

5.    Pick a cool color and create a cool tone layer. Set the blending mode to color.

6.    Now what we need to do is to find a way to blend
these two, so that the warm tone affects perhaps only the highlights
and the cool tone only affects the shadow areas, with a subtle blending
between the two.

7.    One way is to hand paint the masks for each color layer, but we will use a different approach.

8.    What we will do is create a selection for the shadow areas. We do this with Select -> Color Range.

9.    Select Color Range gives us lots of options. In
this case we are selecting based on the shadows. Clicking OK will give
us a selection.

10.    Now with the cold layer selected we create a layer mask based on the selection.

11.    The result is the blue only in the shadow areas.

12.    Whilst it is not strictly necessary in this
example, we will also mask the warm color (you would want to do this if
you were using more than two colors). We do the Color Range again
for shadows, but then invert the selection with Select -> Inverse
before creating the layer mask.

13.    Now we use the Opacity settings on the color layers to tone the effect down as desired.

14.    If you compare the before and after effects you can see the potential of this technique.
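The Color Range shadow selection and the two masked tone layers can be approximated numerically (a NumPy sketch; Adobe’s actual fuzziness math is not public, and the threshold, softness and tone values here are made up for illustration):

```python
import numpy as np

def shadow_mask(gray, threshold=0.35, softness=0.15):
    """1.0 in the shadows, 0.0 in the highlights, with a smooth ramp
    between (gray values assumed to be in 0..1)."""
    return np.clip((threshold + softness - gray) / (2 * softness), 0.0, 1.0)

def split_tone(gray, warm_rgb, cool_rgb):
    """Tone the shadows with the cool color and the rest with the warm color."""
    m = shadow_mask(gray)[..., None]
    warm = np.asarray(warm_rgb) * gray[..., None]
    cool = np.asarray(cool_rgb) * gray[..., None]
    return m * cool + (1.0 - m) * warm
```

Lowering the layer opacities in step 13 corresponds to mixing each toned result back toward the original gray values.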

Monochrome Part 2 Digital Toning

Your monochrome images don’t have to remain colorless. Adding selective color back into an image gives you total control of the crafting of your image, something photographers crave. This allows you to introduce exactly the color(s) you want.
Conventional coloring of photographic images can be in one of two
forms: toning and hand coloring. Toning involves either the overall
coloring of an image through a chemical treatment of a print or
multiple coloring through ‘split toning’ where several different colors
are produced in different parts or tones of an image through successive
treatment of a print. Hand coloring requires the printing of a black
and white image, perhaps a little lighter than normal, and then the
application of color. The color can come in the form of color pencil,
watercolor pencil, watercolor paint, oil paint or inks. There are even
specific products available designed just for the hand coloring of
photographs. This process can be quite laborious.

Hand coloring is a traditional process that was developed before color
photography was technically available to meet the demand for color
photographic images. At one time this was a huge employer, usually of
talented women, with most black and white portraits hand colored.
Traditional hand coloring produces lovely pastel-toned images, usually
of people or romantic subjects, like flowers. A recent revival in the
hand coloring of conventional silver-gelatin photographs has occurred,
being most visible in various series of greeting cards with nostalgic
images of 1950s and 1960s period cars, etc.

In this and following tutorials in this series we will cover a range of
techniques. In software like Photoshop, Paint Shop Pro or PhotoPAINT
there is rarely only one way to do something. In Part 1 we showed one
way to convert a color image to monochrome. Later in the series we
will revisit this and cover some of the other ways. Likewise in this
article we will give one way, the author’s preferred way, of toning an
image digitally. In part 3 we will show a different way.

Digital Toning

Digitally producing an overall coloring to a monochrome image is a very
straightforward process. The steps boil down to the following:

1.    Open the monochrome image or convert a color image to monochrome.

2.    Convert the monochrome image back to RGB mode.

3.    Create a Hue/Saturation Adjustment Layer. In this
case it really doesn’t matter if you tie it to the underlying image
layer or not.

4.   Click the Colorize checkbox so that you can add color to
the image. In the H/S dialog the Hue slider controls the color with
which you ‘tone’ the image.

5.    The Saturation (and Lightness) controls allow you
to further adjust the toning from a very strong to a very subtle effect.

6.    The beauty of Adjustment Layers is that you can
come back anytime and make further adjustments, as here where we have
gone from a cool to a warm tone.

Monochrome Part 3 Digital Toning V2

Digital Toning Another Way
There are so many ways in Photoshop to achieve the toning effect. Here is another one.

In this approach we will take the color from another image and use that
to ‘tone’ the image. You might want to do this to create a coherent
color scheme between a group of images or to match particular
furnishings in a decorative situation. The steps boil down to the following:

1.    Open the image whose color we need to sample.

2.    Use the eyedropper tool to pick the color we want,
in this case the hot pink from the flower for shock value. Optionally
save the foreground color in the Swatches Palette. This is very useful
if you are working on a series, as you can save the Swatches for use

3.    Open the monochrome image or convert a color image
to monochrome. Convert the monochrome image back to RGB mode.

4.    Create a new layer above the background image and fill it with our chosen color.

5.    Change the layer’s blending mode to color and suddenly we have a toned, if garish in this case, image.

6.    You can use the Opacity control to lower the intensity of the toning without shifting the hue of the tone.

Choosing the Right Screen Ruling

This article covers screen ruling selections for a variety of print processes for the printing industry and pre-press professionals.
Today’s advanced screening technologies present the print buyer and
prepress manager with a seemingly endless variety of screen algorithms
from which to choose. How then does one choose the right screen ruling
to deliver the best results on press? The answer: it depends. The
following outlines the options available and their best fit.

Factors Influencing the Status Quo

Traditional rosette-based amplitude modulated (AM) screening had been
virtually unchanged from its inception in the late 1800’s, until the
advent of super-cell AM screens in the early 1990’s. Soon these modern
AM screens, such as Agfa’s Balanced Screening (:ABS), became the norm.

In 1993, two events turned conventional wisdom upside-down. Stochastic
or frequency modulated (FM) screens, such as :CristalRaster, were
introduced. And the first platesetters from companies such as Gerber,
Optronics and Creo, (now Esko-Graphics, ECRM and Eastman Kodak
respectively) became commercially available.

Traditional Screening

AM screens vary the size of the dot on an established grid or
line-screen ruling to change the tonal value. The finer the grid, the
higher the frequency or number of dots and the closer the rows of
dots are to each other. Varying conditions of the prepress process and
the types of presses being used limit the screen ruling. The printing
process, therefore, determines the choice of a traditional AM screen
ruling; it is not solely a decision of preference. 

Stochastic Screening

Stochastic screening enabled new levels of detail. Previously, at an
imager resolution of 2400 dpi, the finest AM screen ruling that could
deliver a continuous 1-99% tonal range was 240 lpi. Considering
the standard screen ruling for magazine production is based on 133 lpi
(still today’s SWOP standard), the ability to deliver screening beyond
133 lpi or 240 lpi seemed revolutionary.
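That 240 lpi ceiling falls out of the usual halftone-cell arithmetic (a quick check; the helper name is ours):

```python
def gray_levels(dpi, lpi):
    """Tonal steps an AM screen can render: the (dpi/lpi)^2 imager pixels
    in each halftone cell, plus one for the empty cell."""
    return int((dpi / lpi) ** 2) + 1

# At 2400 dpi a 240 lpi screen leaves a 10x10 pixel cell: 101 gray levels,
# just enough for a continuous 1-99% tonal range. At 133 lpi there are
# over 300 levels to spare.
```

Push the ruling any finer than 240 lpi at 2400 dpi and the cell can no longer render a full 1-99% range of tones.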

While mezzotinting and stippling (precursors to stochastic
screening in etching and engraving) were popular as far back as the
American and French Revolutions, the concept of modulating tones by
controlling the frequency or number of dots (FM), rather than varying
the size of the dots (AM), was indeed revolutionary. Stochastic
screening in a PostScript workflow was the first method to faithfully
reproduce a broad tonal scale at high fidelity, with line-screen
equivalents of 300, 350 and 400 lpi.

The trick in today’s advanced screening algorithms is to control the
highlight and shadow detail in an FM fashion, utilising no smaller dot
than the process can easily hold. Often one hears how a magazine
manufacturer has settled upon a minimum-sized dot of 28 microns, which
equates to a 2% dot at 133 lpi.

SWOP standards were defined around best practices in magazine
production. Such conditions dictated that in order to print a 2-98%
tonal range, the finest dot the process could hold was a 2% dot at 133
lpi, or 28 microns. And yet, with the inherent variables of a
film-based workflow, consistently holding a 28-micron dot was a
challenge, let alone a 21- or 14-micron dot.
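The micron figures quoted throughout this article all come from the same conversion: a tonal percentage at a given ruling corresponds to a dot of equivalent area within one halftone cell. A minimal sketch, modelling the dot as a square of equivalent area (real dots are rounder, so the industry figures are rounded slightly upward):

```python
import math

# Convert a tonal percentage at a given screen ruling into an
# equivalent dot width in microns, treating the dot as a square whose
# area is that percentage of one halftone cell (25400 microns = 1 inch).
def dot_width_um(percent: float, lpi: float) -> float:
    cell_um = 25400.0 / lpi          # halftone cell width in microns
    return math.sqrt(percent / 100.0) * cell_um

print(round(dot_width_um(2, 133), 1))   # ~27.0 -- the "28 micron" SWOP dot
print(round(dot_width_um(2, 175), 1))   # ~20.5 -- the 21 micron sheet-fed dot
print(round(dot_width_um(1, 240), 1))   # ~10.6 -- the 1% dot at 240 lpi
```

The same formula, run in reverse, is what lets XM screening start from a printable minimum dot and derive the usable rulings.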

The Arrival of CtP

As PostScript-based advanced screening algorithms were challenging the
practical limitations of traditional film-based workflows, another
revolutionary technology arrived: computer-to-plate (CtP). CtP devices
were designed to reduce the steps and variables in delivering dots to
plate.
At first, CtP delivered several production benefits to the printer.
However, it was the simple removal of variables (no film imager or
chemistry fluctuations, no exposure-frame draw-down issues, no
exposure-frame lamp and timing variables, etc.) that proved to be the
factor that enabled the marriage of two revolutionary technologies: FM
screening and CtP.

FM Screening and CtP

With the marriage of FM screening and CtP, prepress systems could now
push the envelope of what these two technologies could deliver. What
had previously been a nearly impossible task – holding a 1% dot at 240
lpi on plate – was now feasible. And because the addressability of
standard 2400-dpi devices maxed out at that 1% dot (10.6 microns), FM
screening seemed a natural fit.

Regardless of whether the distribution was first-order FM (random) or
second-order FM (variable dot sizes placed into mid-tone swirls or
worms), FM screening was too grainy, especially in the mid-tones where
the frequency caused dot clumping, and it was still difficult to
manage on press.

[Figure: 1% pseudo ‘AM’ tone at 300 lpi and 2400 dpi]

The Inherent Benefit of AM Screens and Disadvantage of FM Screens

FM delivers finer detail than AM screens. However, this is the benefit
of the small dot, or high frequency, and not of the random
distribution. Until they max out at 240 lpi, AM screens deliver
smoother flat tints than FM and are more forgiving on press than FM
screens. It is also easier to control grey balance with AM screens.

While some argue that using ink density to control mid-tones should be
the exception and not common practice, they do agree that FM does not
respond to density adjustments. So, the challenge to the industry was
to come up with a screening algorithm that combined the best of FM
(higher fidelity, and more consistent highlight and shadow details)
with the best of the AM world (smoother flat tints, greater operating
latitude on press).

XM Screening: The Best of Both Worlds

The problem with hybrid screens, however, was the visible crossover
where FM and AM meet. The challenge was to combine the two technologies
seamlessly, without noticeable intersections. XM or cross modulation
screening provided the solution.

Agfa’s :Sublima XM technology applies a common sense approach to
advanced screening: match the screening to the pressroom environment,
rather than changing the pressroom to match the screening requirements.
XM screens take into consideration the type of paper typically used
(coated, uncoated, recycled, newsprint, etc.), the printing
architecture (sheet-fed, heat- or cold-set web, flexography), and other
variables (such as typical ink tack, blanket release etc.). XM
screening works within the established parameters and uses the smallest
optimised and printable dot for the application.

As you can see in the example below, the smaller and higher-frequency
XM dots to the right are still placed along an established AM grid, but
no smaller dot is used than can be easily held within this press
condition (in this case – 28 microns for heat-set web).

So, What Screen Ruling Should I Use?

The question is not really what screen ruling, but rather, “What is
the minimum-sized dot I can easily print?” This smallest dot size
varies based on press architecture and typical press environment. The
higher the line ruling, the higher the risk of dropping the highlight
detail on press, yielding blotchy or posterised effects. So, having
established the smallest dot that can be easily held, the next task is
to ensure a full tonal range.

XM screens deliver a full tonal range by using AM screens across the
broad mid-tones, and then converting to an FM (but not randomly
distributed) placement of the dots in the highlights and shadows. XM
and FM algorithms deliver 1-99% tonal ranges by placing (or leaving
out) fewer of those optimised, minimum-sized dots. So just what size
is that dot?

The Magic Number for Heat-set Webs: 28 microns

The smallest dot size depends upon the application. Magazine printers
have optimised their operations around a 28-micron minimum dot, which
equates to a 2% dot at 133 lpi. At 175 lpi, however, that 2% equates
to a 21-micron dot, a size that might work well on sheet-fed presses
but presents a challenge for typical heat-set web environments.
Therefore, XM screening algorithms tend to find that a 28-micron dot
works well (2×3 pixels for a 2400 dpi device, or 2×2 pixels for an
1800 dpi device). So, instead of a traditional standard of 150 lpi,
heat-set web printers find that with XM screening they can nearly
double the resolution – up to 240 or 250 lpi – with no extra effort on
press.
The Magic Number for Cold-set Webs: 35 microns

The issue for newspapers is not the imager quality, the quality of the
plates or even the ink. The newsprint substrate is the single aspect
that defines the screening parameters. By using a minimum dot size that
ranges between 35 and 40 microns, newspaper publishers are realising
the benefit of advanced screening technologies, without having to
change the pressroom. From what used to be a maximum standard of 100
lpi, newspapers are now attaining 180 lpi. And they accomplish this
without reducing dot size, but by simply using XM screening.

The Magic Number for Sheet-fed Presses: 21 microns (but it depends)

The sheet-fed environment in general is quite standardised, and
products such as :Sublima have been carefully formulated to ensure the
customer makes the right screening choice.

With :Sublima, Agfa engineers have assembled compensated screen sets
with pre-established minimum and maximum dots and frequencies based on
a variety of imager and plate characteristics, in combination with a
variety of press environments.

Provided a shop can consistently hold a 2% dot at 175 lpi – a
21-micron dot – XM screening algorithms based on 21 microns are a
popular choice in the optimum sheet-fed environment. Should the
printer use a recycled stock, however, a somewhat larger minimum dot –
28 microns – should be the default. Again, it depends. With an XM
technology, standard 21-micron screen rulings exist at 210, 240, 280
and 340 lpi.

Regardless of technology or environment, one aspect rings common:
optimised process control is a must, and today’s CtP technologies help
to stabilise the environment.

When should one use 240 lpi versus 340 lpi?

Considering that a given combination of stock and ink sets can easily
deliver a 21-micron dot to the press sheet, then why not always use the
finest 340-lpi screen? A finer line screen cannot uncover what is not
there. However, it does allow you to get more detail from larger image
files that do have more information. With today’s rasterising speeds,
processing is not an issue but image archival and retrieval overhead
may be.

The finer the screen ruling, the more XM behaves like FM. FM can
deliver fine detail, but FM dots also resist on-press colour
adjustment through ink density.

At a normal viewing distance, it is difficult to tell the difference
between 240 and 340 lpi with the naked eye. Yet ink density and
reflectance can generate a greater measurable colour gamut or
brilliance with finer screens. And upon closer examination, one can see
differences in detail between 240, 280 and 340 lpi screen rulings.

With XM screening, when using 21-micron based 240 or 340 lpi screen
rulings, the dot size in the highlights and shadows is the same: 21
microns. At 240 lpi, the tonal range between 1% and 4% is built from
that same-sized 21-micron dot. At 340 lpi, the tonal range between 1%
and 8% is built from varying frequencies of those same-sized dots. Due
to the increased line ruling, and based on the AM aspect, 340 lpi
mid-tone dots are naturally smaller than 240 lpi mid-tone dots.
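The 4% and 8% crossover points fall straight out of the cell geometry: a fixed 21-micron dot represents a larger tonal value in a smaller halftone cell. A minimal sketch, again treating the dot as a square within a cell of 25400/lpi microns (illustrative function name, not vendor code):

```python
# Tonal value (%) that a fixed-width square dot represents at a given
# screen ruling, where the halftone cell is 25400/lpi microns wide.
def tone_percent(dot_um: float, lpi: float) -> float:
    cell_um = 25400.0 / lpi
    return (dot_um / cell_um) ** 2 * 100.0

# A 21-micron dot is ~4% of a 240 lpi cell, so 1-4% must be built
# FM-fashion from same-sized dots; at 340 lpi it is ~8% of the cell.
print(round(tone_percent(21, 240), 1))  # ~3.9
print(round(tone_percent(21, 340), 1))  # ~7.9
```

Below those thresholds only the dot count can change, which is exactly where the XM algorithm switches from AM to FM placement.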

These smaller 340 lpi mid-tone dots and their lower ink density yield
a narrower on-press operating latitude than their 240 lpi
counterparts, and yet both allow for more on-press management than
traditional FM dots. Therefore, 240 lpi XM is more forgiving on press
than 340 lpi XM, and 340 lpi XM is more forgiving on press than FM.
But whether the XM screen is 210, 240, 280 or 340 lpi, no smaller dot
is ever needed than that 2% dot at 175 lpi, in conventional AM terms.
Both 240 and 340 lpi XM screens are designed to work well within the
capabilities of the standard sheet-fed press environment. True, there
are indeed subtle and at times valuable differences in the rendering of
the finest image detail and the brilliance of the hues with finer
frequencies, but the practical difference between the two is that on
press, the finer the screen, the narrower the press latitude.

The Choice is Yours

Today’s advanced screening technologies prove to be a perfect fit for
today’s CtP technologies. Since XM screening algorithms combine the
best of both the AM and FM worlds, the matter of best fit depends on
stock characteristics and how much flexibility one desires on press.

It has taken over 250 years for imaging and screening technology to be
optimised to match the performance characteristics of the printing
process. With today’s XM screening, print buyers and printers can
choose not from a position of inherent system limitations, but rather,
from an optimised screening palette based on personal preference and
ease-of-use. The choice is yours.