Issues in Digital Camera Design

In this interview we talk to several people with a deep knowledge of digital camera design. While the interview is now two years old, the information in it is still highly relevant.
As professional photography moves from being predominantly film based
to predominantly digital, many of us are struggling to come to terms
with the key issues of digital sensors. For those of us raised on film
densitometry curves and Ansel Adams’ ‘The Negative’ there is a whole
new world out there and many of the parameters we took for granted have
changed radically.

When you move from an emulsion to a silicon chip sensor, whether CCD or
CMOS, things change. Major issues with film, like reciprocity failure,
do not exist with digital.

CCD sensors come in two major design types, Full frame and Interline.
Full frame CCDs attempt to use as much of the chip surface as possible
for the pixels (more correctly called pels, or picture elements).
Interline CCDs don’t use as much area for sensing because they use some
of the surface area to rapidly transport the picture data from the
CCDs. Thus, Interline CCDs are great for high-speed imaging, like
video or high frame rate sports photography, whilst Full frame CCDs
have greater sensitivity. Full frame CCDs, as used in most professional
digitals, offer the largest inherent imaging area as a proportion of
the chip size, thus they have the best sensitivity and lowest noise.
Certain CCD designs use a process technology called ITO, or Indium Tin
Oxide, which adds about a stop and a half of extra light sensitivity
overall, and around two stops in the blue channel, where CCDs are least
sensitive. ITO is used on the Kodak CCDs in their pro backs. The
generally lower sensitivity of silicon sensors in the blue usually
manifests as greater noise in that channel, because the signal (and its
noise) has to be boosted more to maintain colour balance. This is the
reason many digital camera images benefit from having a Gaussian Blur
applied to the blue channel only.
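As a concrete sketch of that blue-channel treatment, the snippet below smooths only the blue plane of an RGB array while leaving red and green untouched. It is a minimal illustration: the function name is my own, and a simple 3x3 box blur stands in for Photoshop's Gaussian Blur.

```python
import numpy as np

def blur_blue_channel(image):
    """Smooth only the blue channel of an H x W x 3 image.

    A 3x3 box blur stands in here for a Gaussian Blur; red and
    green planes are returned unchanged.
    """
    out = image.astype(float).copy()
    blue = out[..., 2]
    # Pad edges so the blurred channel keeps the same size.
    padded = np.pad(blue, 1, mode="edge")
    acc = np.zeros_like(blue)
    for dy in range(3):
        for dx in range(3):
            acc += padded[dy:dy + blue.shape[0], dx:dx + blue.shape[1]]
    out[..., 2] = acc / 9.0
    return out
```

In practice you would convert back to the original bit depth afterwards; the point is simply that the noise reduction is confined to the noisy channel.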

CMOS sensors, as used in the Canon D60 and the Foveon design, have less
of the chip area devoted to the light sensitive components, hence their
sensitivity is inherently lower. Because of this, CMOS pixels have a
lens built into the chip surface. This lens can cause reduced light
intensity around the edge of the chip with normal lens designs. It
manifests as greater noise around the outer edge of the chip. However,
CMOS offers other major advantages, like lower power consumption, lower
cost and greater ease of integrating other camera functionality onto
the chip.

Digital image sensors, whether CCD or CMOS, are prone to what is called
thermal noise. As the temperature of the chip increases the pixels see
more ‘spurious’ light which shows up as noise. For the technically
inclined, when light hits the sensor it releases electrons, which are
collected in the pixel well. Heat also releases electrons into the
pixel well. Since one electron is the same as another, there is no way
to tell the difference between these ‘thermal’ electrons and ‘light’
ones. This noise is swamped in short exposures with plenty of light. As
I proved for myself in tests, digital cameras can produce more visible
noise the longer they have been switched on, and thus the hotter the
circuitry is. The hotter the ambient temperature, the more noise as
well. So, if you are shooting longish exposures in the outback in
summer, keep your camera in a cooler between shots. Whilst I don’t
believe any 35mm-based digitals do it, some medium format digital
backs, like the Kodak ones, also capture a dark frame in exposures
longer than 1/4 second. A dark frame is a shot of the same length as
the imaging exposure but with the CCD covered, so that the only ‘light’
it sees is the spurious ‘dark current’ noise. On other cameras you can
naturally do this yourself using a lens cap and Photoshop. Some digital
backs for medium and large format cameras use active cooling to keep
the CCD cool, and thus reduce noise. This adds bulk and significantly
increases power drain, but works excellently. CCD cooling was developed
by the astrophotography guys to remove noise from their very long, i.e.
one hour, exposures. We used to do this with film too, but there to
improve the reciprocity characteristics.
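The dark-frame technique described above amounts to a pixel-by-pixel subtraction, which can be sketched in a few lines. This is a hypothetical helper for illustration, not code from any camera or back mentioned here.

```python
import numpy as np

def subtract_dark_frame(exposure, dark_frame):
    """Remove fixed-pattern thermal noise from an exposure.

    `dark_frame` is a lens-capped frame captured with the same
    exposure time (ideally at the same temperature), so it records
    only the 'dark current' noise. Negative results are clipped,
    since a pixel value cannot go below zero.
    """
    corrected = exposure.astype(float) - dark_frame.astype(float)
    return np.clip(corrected, 0.0, None)
```

Doing this with a lens cap and Photoshop, as suggested above, is the same operation applied via the Difference or Subtract blend modes.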

One thing that is starting to become an issue with digital capture is
inherent sharpness. Some of the higher resolution sensors have pixel
sizes around that of the circle of confusion of common lenses. Those of
you who remember your optics (you all do don’t you?) will recall that
the circle of confusion defines the resolving power of the lens. Many
photographers working with cameras at this leading edge, such as David
Meldrum, report that they get much sharper images with certain lenses
than others. This will be an increasing issue as more cameras use such
sensors and as the resolution of sensors continues to rise. Then, just as
with the finest resolution films, you will need to be very choosy about
which lenses you use to get the sharpest result.
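To put rough numbers on this, one common yardstick for a lens's smallest resolvable spot is the diameter of the diffraction-limited Airy disk, 2.44 λN. The calculation below is an illustrative back-of-the-envelope estimate, not a measurement from any camera mentioned here.

```python
def airy_disk_diameter_um(f_number, wavelength_um=0.55):
    """Diameter of the diffraction-limited Airy disk in microns:
    2.44 * wavelength * f-number, with green light (0.55 um) as
    the default wavelength."""
    return 2.44 * wavelength_um * f_number

# At f/8 the Airy disk is about 10.7 microns across, already larger
# than the 6-9 micron pixels of high-resolution sensors, so at small
# apertures the lens rather than the sensor limits sharpness.
```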

So maybe film and digital are not so different after all.

To get an additional perspective, we interviewed Kenneth Boydston,
President of MegaVision, and Mike Collette, founder and president of
Better Light, Inc.

 

Wayne: CCD vs. CMOS – Are there any fundamental issues between these
that make you see one as superior to the other? Why? Now or in terms of
future development potential?

Ken: As an image sensor, nearly everything about CMOS is better than
CCD except one very big thing: Signal-to-noise ratio.  For an
equivalent signal, CMOS has always been noisier, which has limited its
use to lower-end applications.  Because of the
numerous advantages of CMOS, silicon designers are motivated to drive
down the noise, and over the last few years have done so.  At the
same time, improvements have been made in CCD, though not as
much.  We are, therefore, seeing CMOS sensors increase their
market share, and begin to appear in increasingly high end
applications.  My guess is that this trend will continue.

Mike: There is no intrinsic advantage to either CCD or CMOS technology
in terms of image quality — equivalent light-sensing elements can be
made with either technology.  It is my understanding that CMOS
sensors are easier to fabricate, because they are made on the same
high-volume fab lines as most other integrated circuits.  CMOS
technology also facilitates the inclusion of additional circuitry on
the sensor silicon, which can reduce overall component count in a
digital camera.  Both of these advantages are important for
lower-cost, more compact digital cameras.  However, there is no
PERFORMANCE advantage for an image sensor fabricated with CMOS vs. an
image sensor fabricated with “CCD” (usually NMOS) technology; in fact,
many “CCD” image sensors produce notably better image quality than CMOS
image sensors, for several reasons.  Also, nearly all CMOS fab
lines have significant limitations on the size of each integrated
circuit that can be produced, which will in turn limit the size and
performance of any CMOS image sensors, too.  For the highest image
quality, “CCD” image sensors will continue to be superior to CMOS image
sensors, especially for larger-format and scanning digital cameras.

Wayne: Sensitivity vs feature size – as resolutions increase, pel size
drops, unless the sensor gets larger. This has an impact on ‘ISO’
sensitivity. Are there any developments pending that are likely to
impact on this?

Ken: If a large pixel and a small pixel are alike in every other way,
the large pixel will have more signal, because it collects more photons
and has a larger well in which to store the photon-generated electrons.
Since both the large pixel and the small pixel have the same amount of
noise (we said they were the same in every other way), the large pixel
has better signal to noise, and therefore wins the ISO prize.
This, in general, is the case. But if the small pixel can be made with
lower noise, then the small pixel may win the ISO prize even though it
has less signal, because it is signal-to-noise ratio that controls
ISO.  Because a small hunk of silicon is much cheaper than a big
hunk of silicon, and because a small hunk of silicon means smaller,
cheaper cameras, silicon designers are motivated to drive down the
noise.  Small pixels today are better than big pixels were a few
years ago.  Making lower noise little pixels is similar to making
lower noise big pixels, so it is likely that large pixels will continue
to have better ISO than little pixels, but little pixels will get good
enough for an increasing number of applications.
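Ken's point can be made concrete with photon shot noise alone: signal grows with pixel area, shot noise grows with the square root of the signal, so SNR scales linearly with pixel pitch. The photon flux figure below is an arbitrary illustrative number, not a real sensor specification.

```python
import math

def shot_noise_snr(pixel_pitch_um, photons_per_um2=100.0):
    """Idealised shot-noise-limited SNR for a square pixel.

    Signal is proportional to pixel area; shot noise is the square
    root of the signal; so SNR = sqrt(signal), which grows linearly
    with pixel pitch.
    """
    signal = photons_per_um2 * pixel_pitch_um ** 2
    return math.sqrt(signal)

# Doubling the pixel pitch quadruples the collected photons (two
# stops of light) and doubles the shot-noise-limited SNR.
```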

Mike: Not likely.  Present-day image sensors can achieve a quantum
efficiency (QE) of over 60%, which means that these sensors are already
converting over 60% of the photons that strike them into electrical
signals.  The maximum possible QE is 100%, which would
represent less than one f-stop of improvement in sensitivity over
today’s sensors.  Shot noise, which is a fundamental component of
the signals from these image sensors, cannot be mitigated or avoided by
any technology — it’s one of the “laws of physics” — only larger,
more light-sensitive pixels can truly improve the sensitivity of a
digital camera.  There is a sensor technology that could
dramatically increase the sensitivity of scanning digital cameras (by
more than three f-stops), but it’s not clear whether there is enough
market interest in these large-format devices to merit the development
cost of such a sensor.
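Mike's "less than one f-stop" figure follows directly from the QE numbers: going from 60% to a perfect 100% multiplies the signal by 100/60, and f-stops are the base-2 logarithm of that ratio. A quick check:

```python
import math

# Headroom left for QE improvements: from 60% QE to a perfect 100%.
gain = 100 / 60          # signal multiplier, about 1.67x
stops = math.log2(gain)  # about 0.74 of an f-stop
```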

Wayne: How does digital sensor design impact on the optical design of a
camera’s lenses? Are ‘digital’ lenses really any better in practice
than ‘film’ ones when used in a digital camera?

Ken: Of course, sensor size affects the size of the lens, and pixel
size affects the resolution required of the lens. The resolution
limitations of a typical 35 mm lens can be clearly seen as the pixel
size falls from, say, 12 microns to 6 microns. For single shot Bayer
pattern color sensors, the resolution limitation is not all bad, as
some optical resolution limitation is desirable to reduce color
aliasing (Wayne – this is the same effect as incorporating an
anti-aliasing filter, which really just introduces a small degree of
blur).

 

There is another consideration as well.  The surface of a sensor
is not uniformly sensitive to light.  In between pixels, there is
often area that is not sensitive at all.  Within the sensitive
area of the pixel, there are sometimes areas that are not as sensitive
as others.  While full frame CCD sensors (such as are used in
some high-end backs and cameras) are nearly 100% sensitive, and
uniformly so, CMOS sensors and most other CCD sensors are
not.  Because of this, a tiny micro-lens is often stuck on top of
each pixel to focus the light falling on the insensitive area
into the sensitive area, which is usually near the middle of the
pixel. These little micro-lenses work best if the light is coming at
them perpendicular to the focal plane.  So light coming parallel
to the optical axis (telecentric) is best.  This is why wide angle
lenses don’t work so well with many digital cameras.  

Lens designers are therefore designing telecentric lenses.  Since
telecentric lenses require more elements, they use more glass and are
therefore more expensive.  This again motivates sensor designers
to make smaller sensors so that the lenses can be smaller, use less
glass,  and thus be cheaper.

Mike: Some smaller digital image sensors use micro-lenses over each
pixel to direct more of the incoming light into the active area of each
pixel (which, in these cases, is smaller than the spacing between
pixels, so there is some “dead area” around each pixel).  These
sensors may benefit from a telecentric lens design, where the light
rays striking the image sensor are more-or-less parallel to each other
(and therefore perpendicular to the image sensor surface over its
entire area).  Professional digital cameras typically use image
sensors that do not have micro-lenses, and therefore do not require
telecentric optics.  Most commercially-available “digital lenses”
are designed for larger-format cameras (with interchangeable lenses),
and these “digital” lenses may deliver improved performance under
certain test conditions.  However, in most real-world 
applications, there is little or no difference between the so-called
“digital” large-format lenses, and their “non-digital” (film?)
counterparts.  Our large-format scan backs make excellent lens
testing devices, and we have evaluated a number of (large-format)
“digital” and “non-digital” lenses this way.  In our testing to
date, we have obtained the best overall results with a “non-digital”
lens.

Wayne: What do you see as the likely developments in camera sensor design over the next 1, 2 and 5 years?

Ken: More of the same: better, faster, cheaper. Smaller pixels getting
better, so more pixels can be crammed onto the same hunk of silicon.
One thing not likely to change soon is the spectral sensitivity of
humans, so I don’t think pixels that image visible light will get a
whole lot smaller than 3 microns (Wayne – most current sensor designs
go down to around 8 or 9 microns, so there is still room to get
smaller).

Mike: Perhaps Foveon will get their interesting new color technology
working reliably.  Many smaller image sensors are already at the
practical limit of (small) pixel size, so it is unlikely that even
smaller pixels will be developed.  CMOS sensors may cram more
electronics onto the same silicon, but this probably won’t improve the
sensor performance (image quality) significantly, if at all.

Wayne: Will the Foveon development take over the world?

Ken: That depends on how well it works.  My experience with 100%
sampled color images vs. Bayer pattern color images is that it takes
about two Bayer pixels to equal one 100% sampled pixel.  Thus, all
else being equal, silicon hunkage could be halved.  But I don’t
know yet how close all else is to being equal. I imagine there are
significant challenges, not the least of which might be
signal-to-noise.  The property of silicon that Foveon is
exploiting to separate color has been well known for nearly as long as
silicon sensors have been around.  If it was easy to do, it would
have been done before. If they pull it off, it will be a laudable
achievement.

Mike: That may depend upon your definition of “the world”… 
Foveon currently has no intention of producing a sensor large enough to
be of interest to most professional photographers, so this small but
important segment of “the world” probably won’t be affected. 
Foveon is making a lot of noise about “true color at every pixel”, but
scanning digital cameras have enjoyed this advantage since their
introduction in 1994, delivering better image quality than Foveon could
hope to produce.  Even when Foveon gets their technology working
reliably, there are many aspects of the consumer digital camera
marketplace that do not involve technology, and these “market forces”
may have more influence on Foveon’s eventual success than their patent
portfolio or PR efforts.

Special quote of Ken’s: ‘Of one thing I am pretty certain: A
micro-acre of silicon takes a better picture than a micro-acre of
silver, and the silicon keeps getting better.’

We would like to thank Ken Boydston, President of MegaVision, Mike
Collette, President of Better Light, Inc. and Jay Kelbley, Worldwide
Product Manager of Digital Capture for Kodak for providing information
for this article.
