What the New Canon Releases Mean, Part 1

Canon has just launched a huge number of new imaging products. What does it all mean?
Well, Canon has just done its usual half-yearly product blitz. Now
that we have all read the press releases, it is time to consider the
consequences.

One of those consequences is that Canon continues to lead the field
in digital SLR development. In recent years they have consistently
led the way in introducing new features and new price points. While
everyone else is still releasing low-end 6MP dSLRs, Canon ships 8MP
models. While everyone else is standardizing on small sensors, Canon
is pushing full frame.

Now you can quibble over whether 6 or 8MP really makes that much
difference, but in reality more pixels give you more options, all other
things being equal. More pixels let you do bigger prints, crop more,
etc. The one danger with more pixels is that, if the sensor size
doesn’t grow, there is the possibility of more noise. In comparisons I
have done, I see a very slightly increased noise level in the 8MP Rebel
XT/350D over the Rebel/300D. Slight and of no impact in real shooting,
but there. That is why the 5D is so important. Not only does it
establish a new price point for a 12.8MP camera from one of the main players, but
it also shows Canon’s ability to bring their full frame sensor
technology down from the top end cameras to the more affordable levels.
There was never any doubt that they could do this, since their move to
CMOS sensors, but it is a clear statement of intent to do so.
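The "more pixels let you do bigger prints" point is easy to quantify. A quick back-of-the-envelope sketch: the 300 ppi figure is a common rule of thumb for a critically sharp print, and the pixel dimensions are typical values I'm assuming for cameras in each class, not specifications quoted in this article.

```python
# Back-of-the-envelope: how much bigger can you print as pixel counts rise?
# Assumes 300 pixels/inch for a critically sharp print (a common rule of thumb).

def print_size_inches(px_w, px_h, ppi=300):
    """Return (width, height) in inches for a print at the given pixel density."""
    return (px_w / ppi, px_h / ppi)

# Assumed, typical sensor resolutions (width x height in pixels)
cameras = {
    "6MP class": (3008, 2000),
    "8MP class": (3456, 2304),
    "12.8MP class": (4368, 2912),
}

for name, (w, h) in cameras.items():
    pw, ph = print_size_inches(w, h)
    print(f"{name}: {pw:.1f} x {ph:.1f} inches at 300 ppi")
```

All other things being equal, the 12.8MP sensor buys you roughly a 14 x 9 inch print where the 6MP sensor manages about 10 x 6.5.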

Sensors smaller than 35mm are great for sports photographers, who get
an effective boost in focal length for the same money out of their
lenses. But for the rest of us it causes problems at the wide angle
end. It also has the aforementioned potential noise issue. However,
smaller sensors are cheaper to manufacture. So I expect to see Canon
gradually move to effectively two dSLR families: one based on the
smaller sensor size for the very cost sensitive end of the market that
allows the use of the smaller, cheaper ‘digital’ lenses, and a full
frame line for the serious amateur and pro markets. This is effectively
what they have now, but it will undoubtedly be developed. Price drift
will push both lines lower.
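The "effective boost in focal length" mentioned above is just the crop factor: the ratio of the full-frame diagonal to the smaller sensor's diagonal. A quick sketch (the APS-C dimensions used are approximate, and only illustrative):

```python
import math

def crop_factor(sensor_w_mm, sensor_h_mm):
    """Crop factor relative to a full-frame 35mm sensor (36 x 24 mm)."""
    full_diag = math.hypot(36, 24)
    return full_diag / math.hypot(sensor_w_mm, sensor_h_mm)

# Canon's APS-C sensors are approximately 22.2 x 14.8 mm
cf = crop_factor(22.2, 14.8)
print(f"Crop factor: {cf:.2f}")  # about 1.6
print(f"A 300mm lens frames like a {300 * cf:.0f}mm lens would on full frame")
```

That roughly 1.6x multiplier is the sports photographer's free telephoto reach, and equally the landscape photographer's lost wide angle.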

What of the other camera makers? Well, they’ll be playing catchup until
they either manage to get their act together and provide some real
competition, or until there are mergers or dropped product lines.
Pentax, Olympus and Nikon, in particular, are producing very capable
cameras in their own right. I’ve had an E-300 here for extended
testing for some time, and this 8MP camera is a joy to use. However,
while the Olympus SLRs of old were marvels of engineering in a small
package, Olympus’ current offerings enjoy no size advantage from
their much-hailed small sensor. Pentax’s ist D range are good
cameras, and Nikon naturally has lovely ones too, the D70 in
particular. But sadly none of these stands out as a class leader. And
of course there are the others: Fuji, Minolta, etc., all appealing
for certain reasons but either overpriced, under-marketed or just not
standouts.

What the industry could benefit from is one of the other players taking
a huge leap and thus providing some real competition. At the moment I
don’t consider it to be there. The current state, as I see it,
is that if you already have Minolta or Pentax lenses, you’ll look at
those brands first. But a surprising number of such people are ditching their
lenses and switching systems. A few people new to SLRs might trickle in,
but I think this is more likely to benefit Pentax than Minolta. If you
have Nikon lenses then you will consider Nikon, Fuji or Kodak. Canon
lens people are well looked after by Canon, but also Kodak, if they
want an alternative. Newcomers to SLRs will likely gravitate towards
Canon and Nikon (which is still benefiting from its historical quality
image), and probably Pentax and Olympus.

When I talk to the Canon executives they make no bones about wanting to
own the dSLR market. The market share figures are showing that they are
increasingly doing just that. We can only hope that one of the other
makers has been beavering away in the lab and has a quantum leap just
about to come out, as competition is good for everyone.

Unsuitable for Podcasts

Many MP3 players lack the features that would make them usable for listening to podcasts, seminars and other long material
The shift to MP3 players as replacements or additions to CD players,
either portable or in cars and homes, offers many real advantages.
However many of these devices are quite unsuitable when it comes to
listening to podcasts, seminar recordings, business and marketing
practice tapes and recordings of college/university lectures.

MP3 players evolved with a heavy focus on listening to music. In the
main original market, this meant collections of fairly short tracks. So
the controls evolved along these lines: easy controls to skip tracks
forwards or backwards, menus of tracks and playlists, remembering
which track you were on when the unit switches off, and so on. For popular
music, this all works very well.

Podcasts, lecture recordings and motivational works or talking books,
on the other hand, are characterised by a smaller number of
longer-duration tracks. Lately I’ve been testing some MP3 players. Because I
often listen to podcasts or seminar recordings in the car, I’ve been
trying them out with such material, as well as my usual choice of
music: jazz, gregorian chants, celtic music and 13th Century French
secular music. Let’s consider two MP3 players I’ve had here for
some time and have been experimenting with: the Samsung YEPP and
the XIRO 128MB player. Both will remember which track you are on but
forget just where you were in that track. This is really annoying when,
say, you drive somewhere listening to a one-and-a-half-hour lecture and
are only halfway through when you reach your destination. The unit
switches off after it has been left on pause for some time and forgets
where you are. To make matters worse, the fast forward and back
controls only advance at a pretty slow rate, making it very time
consuming to move quickly through a track. The iRiver H-10 I also have does
remember where you are, but has a similarly slow fast forward.
Apple’s iPod allows you to scrub through a recording using the control
wheel, but is also not as quick at fast forward as I would like.

So, to me, a few things need to be added to new MP3 players to
get the best possible use out of them. Firstly, they all need to
remember where you are in a track when they turn off. Secondly, they
need a logarithmic fast forward and reverse control: one that starts
off slow and accelerates the further you move the control, or the
longer you hold it or spin the wheel.
Thirdly, for the larger units, they should offer high
quality recording as a built-in option that doesn’t require anything
else and have a microphone jack for the use of external microphones.
This would help podcasters, students and many others.
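The accelerating fast-forward proposed above can be sketched in a few lines. The speed steps and hold-time thresholds here are purely illustrative assumptions, not any player's actual behaviour:

```python
# Sketch of an accelerating ("logarithmic") fast-forward: the longer the
# seek control is held, the faster the playhead advances.

def seek_speed(hold_seconds):
    """Playback seconds skipped per real second, growing with hold time."""
    if hold_seconds < 2:
        return 2      # start gently: 2x, for fine positioning
    elif hold_seconds < 5:
        return 10     # then 10x
    elif hold_seconds < 10:
        return 60     # then a minute of audio per second
    return 300        # finally five minutes per second

def simulate_hold(total_hold_seconds):
    """Total playback time skipped after holding the button this long."""
    skipped = 0
    for t in range(total_hold_seconds):
        skipped += seek_speed(t)
    return skipped

print(simulate_hold(15), "seconds of audio skipped in a 15-second hold")
```

With a scheme like this, short taps still give fine control, yet jumping halfway into a ninety-minute lecture takes seconds rather than minutes.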

There’s the challenge. Let’s see who is the first manufacturer to do it properly.

Time for a Shake-up in Patent Law

With the European rejection of changes to their patent law to cover software patents, perhaps it is time for a re-think worldwide
The massive vote against a proposed
change to patent law in Europe to create a single approach to patenting
software means that perhaps all countries should reconsider their
patent law on software, at the very least. The European Parliament rejected the law 648 to 14, with 18 abstentions.

Patenting of software in the US has reached absurd levels. The US
patent office has basically dropped all assessment, or so it appears,
and is granting patents unless someone objects. This forces much more
litigation than should be happening, which is very bad for innovation
from small companies that cannot afford to defend themselves against an
attack from a large company with deep pockets and retained lawyers who
need to justify their fees. US companies have been patenting all
sorts of things that have had substantial prior use, on the basis that
once they hold the patent it is up to other people to challenge it
in court. They hope it will simply appear easier to pay a nominal
licensing fee. Indeed the true decadence of the present system is
obvious with a number of US companies effectively stopping all real
trading and simply generating revenue from patents and litigation.

It is my view that the US government is allowing the interests of big
business to drive intellectual property issues, and that this will
ultimately bite the American technology sector on the arse as the
centers of innovation move elsewhere. Patenting of software is one
example, as are the patenting practices in bio-technology and the
ongoing extension of copyright to protect Disney.

Could the rise of patent protection actually be stifling innovation? An
article in New Scientist magazine on the 2nd of July, 2005, entitled
“Are we on our way back to the Dark Ages?”, by Robert Adler, discusses
the work of Jonathan Huebner, which suggests that the rate of real
innovation per billion of world population peaked in 1873 and has been
sharply declining ever since. Whilst this work is
highly disputed, it does raise some interesting ideas about what
constitutes real innovation as opposed to mere refinement.

Perhaps it is time the whole world had a good look at intellectual
property laws and re-evaluated the types, relevance and appropriate
protections granted under the various forms of intellectual property
rights. And also the obligations.

So what is Digital Fine Art?

A thoughtful article on what digital fine art is.
In the interests of stimulating some debate, I propose to attempt to
answer this question and then encourage you to email us your
reactions/feelings/ideas for publication.

I would define digital fine art as any art in which computer or digital
technology has been used in some part of the artistic process. This is
a very broad definition but a good one I think. One could be very
limiting and say that digital fine art is only that which is entirely
‘made’ using digital processes. Whilst this is also a valid definition
it is too narrow for our purposes here. Whilst I can foresee a point
where the word digital can be dropped and we can simply concentrate on
the art, not the technology of the process, we are not there yet. We
are at a point in time where some individuals have seen the potential
of digital technology in the artistic process but the vast majority
have not, or consider it too ‘easy’ to really be art.

Digital technology can be applied to the whole artistic process or to
only part of it. An example of the latter is the following. My wife is
a traditional decorative artist who tends to leave the digital side to
me. She was commissioned to do a large, quite complex painting that had
important issues of perspective and scale to resolve. Rather than doing
this in her more usual trial and error method I convinced her to
prototype the painting on the computer. To do this we scanned various
elements from books of photographs that were close to what she wanted.
We then played around with these, changing their size and perspective
until we had a mock-up of the painting that worked compositionally and
impact-wise. We then printed this off as a reference, and off she went
to paint in her usual acrylics. Here the end result is totally
non-digital, yet an important role was performed in the digital domain.
The same could have been achieved with pen and paper but the ease with
which we could reposition things and experiment greatly facilitated the
process and improved the end result.

Digital technology can be applied to any of these areas and processes of art:

o    Photography;

o    Painting and drawing;

o    Printing of the painting, drawing or photograph;

o    The physical painting of the artwork;

o    Planning;

o    Design of sculptures;

o    Production of sculptures;

o    Motion picture, video and animation work;

o    Lighting and sound for performance art.

The most important thing to remember is that you can incorporate as
much or as little digital technology into your art process depending on
what you feel comfortable with, what works with your vision and what
you can afford.

Further, we can divide digital fine art into a number of categories. As a working basis we can divide it into the following:

o    Algorithmic or mathematical art;

o    Digital replacement for natural media;

o    Photo-manipulation;

o    Digital Synthesis.

Algorithmic or mathematical art includes a number of areas of digital
art. We have all seen Fractal art, which was very popular for a while.
Other types of algorithmic art include initiatives in artificial
intelligence to allow computers to ‘paint’ and various programming
approaches to turning images into paintings (some of the Photoshop
filters, for instance). Some other authors include 3D art in this
category, arguing that the production of models of objects, the
positioning of lighting and the camera and then allowing the computer
to ‘render’ the scene by mathematically calculating the light/optical
effects puts it in the algorithmic category. I actually disagree with
them and, as we shall see, put 3D in a different category.

Digital replacement of natural media basically uses the computer to
emulate various ‘conventional’ artistic processes and materials. The
results of such work are frequently indistinguishable from ‘the real
thing’. The actual process of creating the work is effectively similar,
except that rather than brushes, a canvas and paints, we use a graphics
tablet, monitor and software. Digital natural media offer advantages
and disadvantages over the real thing. It can be far quicker to work
in, it is easier to mix otherwise incompatible media, like oils and
watercolours, and the ready ability to correct mistakes encourages a
bolder, more experimental style. Also, since the output is actually significantly
independent of the creation process, it is possible to later choose
things like the size of the work and the media it is printed on. The
disadvantages are that it is not real paint, the tactile sensations are
not there and the reality is that some techniques that are so natural
when working with natural media don’t translate well into the computer
(at least not yet).

Photo-manipulation is perhaps the most prevalent form of computer art.
In many ways this is also digital natural media because most of the
things that you can do to photographs in the computer could be done in
the darkroom, just usually more slowly and with far more difficulty. The
most heavily commercial of the digital art areas, along with 3D, it
offers the challenge of surpassing mere ‘play’ to become truly fine art.

Digital synthesis or what I would rather call ‘Holistic or Integrative
Art’ uses any and all techniques, including conventional ones, to get
the result you want. In this form, quite akin to ‘mixed media’ the end
result is what matters, not the ‘racial purity’ of the techniques used
to create it. My own preference for this type of art (and digital art
specifically) is that it puts the focus where it should be, on the
quality of the art, its message and how well it communicates it. Whilst
the media used has some relevance to a collector or gallery when
considering issues of permanence and suitable display and storage
conditions, it has, I believe, been too long used as a form of
selective snobbery.

In reality there are very few digital artists who work purely in one
mode. For example, when I was heavily involved in algorithmic art, I
would still commonly use photo-manipulation techniques on the resulting
images to fine-tune colour, contrast and composition. I would then often
use conventional techniques to arrange multiple images into one
‘piece’. This is also why I don’t put 3D art in the algorithmic
category alone. As an artist who works in this area, I actually feel it
combines many techniques. It mixes sculpture, set design, industrial
design, photographic lighting and photography with digital natural
media painting, and mathematical art. I have also never seen 3D art
done well without post-production photo-manipulation. Thus it combines
so many forms of conventional art with all the forms of digital art.



So Where’s The Beauty?

A good friend of mine, Steve Danzig, and I were having a long ICQ chat
the other day and ended up discussing beauty in digital art. It had
occurred to me that when you look at a wide cross-section of digital
art, it divides into two looks: beautiful and optimistic; and dark,
depressing images to cut your wrists by. Interestingly, most
of the beautiful and optimistic digital imagery that is not
kitsch lies in the mathematical art domain, whilst the dark imagery is
mostly photo manipulation and 3D. Is this a real perception? Does it
reflect the personality types of the people drawn to these different
approaches? Email in your thoughts.

The old examples of my algorithmic art show one type of such work.

Issues in Digital Camera Design

In this interview we talk to several people with a deep knowledge of digital camera design. While the interview is now two years old, the information in it is still highly relevant.
As professional photography moves from being predominantly film based
to predominantly digital, many of us are struggling to come to terms
with the key issues of digital sensors. For those of us raised on film
densitometry curves and Ansel Adams’ ‘The Negative’ there is a whole
new world out there and many of the parameters we took for granted have
changed radically.

When you move from an emulsion to a silicon chip sensor, whether CCD or
CMOS, things change. Major issues with film, like reciprocity failure,
do not exist with digital.

CCD sensors come in two major design types, Full frame and Interline.
Full frame CCDs attempt to use as much of the chip surface as possible
for the pixels (more correctly called pels, or picture elements).
Interline CCDs don’t use as much area for sensing because they use some
of the surface area to rapidly shift the picture data off the sensor.
Thus, Interline CCDs are great for high-speed imaging situations, like
video or high frame rate sports photography, whilst Full frame CCDs
have greater sensitivity. Full frame CCDs, as used in most professional
digitals, offer the largest inherent imaging area as a proportion of
the chip size, thus they have the best sensitivity and lowest noise.
Certain CCD designs use a process technology called ITO, or Indium Tin
Oxide, which adds about a stop and a half of extra light sensitivity,
but around two stops in the blue channel, where CCDs are least
sensitive. ITO is used on the Kodak CCDs in their pro backs. The
general lower sensitivity of silicon sensors in the blue usually
manifests as greater noise in that channel, because the signal (and its
noise), have to be boosted more to maintain colour balance. This is the
reason many digital camera images benefit from having a Gaussian Blur
applied to the Blue channel only.
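The blue-channel blur mentioned above is straightforward to illustrate. This is a minimal sketch using NumPy rather than Photoshop; the kernel radius and sigma are arbitrary assumptions, and only the blue channel is smoothed while red and green pass through untouched:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """1-D Gaussian kernel, normalised to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur_blue_channel(rgb, sigma=1.0):
    """Blur only the blue channel of an H x W x 3 uint8 image (separable Gaussian)."""
    k = gaussian_kernel1d(sigma, radius=max(1, int(3 * sigma)))
    blue = rgb[..., 2].astype(float)
    # Convolve rows, then columns; mode="same" keeps the image size
    # (edge pixels are only approximate).
    blue = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, blue)
    blue = np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, blue)
    out = rgb.astype(float).copy()
    out[..., 2] = blue
    return out.clip(0, 255).astype(np.uint8)
```

Because the luminance signal carries little weight in the blue channel, smoothing it suppresses the noisiest channel while costing almost no visible sharpness.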

CMOS sensors, as used in the Canon D60 and the Foveon design, has less
of the chip area devoted to the light sensitive components, hence their
sensitivity is inherently lower. Because of this, CMOS pixels have a
lens built into the chip surface. This lens can cause reduced light
intensity around the edge of the chip with normal lens designs. It
manifests as greater noise around the outer edge of the chip. However,
CMOS offers other major advantages, like lower power consumption, lower
cost and greater ease of integrating other camera functionality onto
the chip.

Digital image sensors, whether CCD or CMOS, are prone to what is called
thermal noise. As the temperature of the chip increases the pixels see
more ‘spurious’ light which shows up as noise. For the technically
inclined, when light hits the sensor it releases electrons, which are
collected in the pixel well. Heat also releases electrons into the
pixel well. Since one electron is the same as another, there is no way
to tell the difference between these ‘thermal’ electrons and ‘light’
ones. This noise is swamped in short exposures with plenty of light. As
I proved for myself in tests, digital cameras can produce more visible
noise the longer they have been switched on, and thus the hotter the
circuitry is. The hotter the ambient temperature, the more noise as
well. So, if you are shooting longish exposures in the outback in
summer, keep your camera in a cooler between shots. Whilst I don’t
believe any 35mm-based digitals do it, some medium format digital
backs, like the Kodak ones, also capture a dark frame in exposures
longer than 1/4 second. A dark frame is a shot of the same length as
the imaging exposure but with the CCD covered, so that the only ‘light’
it sees is the spurious ‘dark current’ noise. On other cameras you can
naturally do this yourself using a lens cap and Photoshop. Some digital
backs for medium and large format cameras use active cooling to keep
the CCD cool, and thus reduce noise. This adds bulk and significantly
increases power drain, but works excellently. CCD cooling was developed
by the astrophotography guys to remove noise from their very long, i.e.
one hour, exposures. We used to do this with film too, but there to
improve the reciprocity characteristics.
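The lens-cap-and-Photoshop approach described above amounts to simple per-pixel subtraction. A minimal sketch of dark-frame subtraction, with made-up pixel values standing in for real sensor data:

```python
import numpy as np

def subtract_dark_frame(exposure, dark_frame):
    """Subtract the fixed-pattern thermal noise captured in a lens-cap exposure."""
    # Work in a signed type so the subtraction can't wrap around, then clip back.
    result = exposure.astype(np.int32) - dark_frame.astype(np.int32)
    return result.clip(0, 255).astype(np.uint8)

# Simulate: a flat scene plus a hot-pixel pattern that appears in both frames
scene = np.full((4, 4), 100, dtype=np.uint8)
hot_pixels = np.zeros((4, 4), dtype=np.uint8)
hot_pixels[1, 2] = 40                 # a "hot" pixel from thermal dark current
exposure = scene + hot_pixels         # what the imaging shot records
dark = hot_pixels                     # the lens-cap frame sees only the noise
clean = subtract_dark_frame(exposure, dark)
print(clean[1, 2])                    # back to 100, hot pixel removed
```

The same fixed-pattern component appears in both frames because the dark frame uses the same exposure length and temperature, which is why the subtraction removes it so cleanly.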

One thing that is starting to become an issue with digital capture is
inherent sharpness. Some of the higher resolution sensors have pixel
sizes around that of the circle of confusion of common lenses. Those of
you who remember your optics (you all do, don’t you?) will recall that
the circle of confusion defines the resolving power of the lens. Many
photographers working with cameras at this leading edge, such as David
Meldrum, report that they get much sharper images with certain lenses
than others. This will be an increasing issue as more cameras use such
sensors and as the resolution of sensors continues to rise. Then, just as
with the finest resolution films, you will need to be very choosy about
which lenses you use to get the sharpest result.

So maybe film and digital are not so different after all.

To get an additional perspective, we interviewed Kenneth Boydston,
President of MegaVision, and Mike Collette, founder and president of
Better Light, Inc.

 

Wayne: CCD vs. CMOS – Are there any fundamental issues between these
that make you see one as superior to the other? Why? Now or in terms of
future development potential?

Ken: As an image sensor, nearly everything about CMOS is better than
CCD except one very big thing: Signal-to-noise ratio.  For an
equivalent signal, CMOS has always been noisier, which has limited its
use to lower end applications.    Because of the
numerous advantages of CMOS, silicon designers are motivated to drive
down the noise, and over the last few years have done so.  At the
same time, improvements have been made in CCD, though not as
much.  We are, therefore, seeing CMOS sensors increase their
market share, and begin to appear in increasingly high end
applications.  My guess is that this trend will continue.

Mike: There is no intrinsic advantage to either CCD or CMOS technology
in terms of image quality — equivalent light-sensing elements can be
made with either technology.  It is my understanding that CMOS
sensors are easier to fabricate, because they are made on the same
high-volume fab lines as most other integrated circuits.  CMOS
technology also facilitates the inclusion of additional circuitry on
the sensor silicon, which can reduce overall component count in a
digital camera.  Both of these advantages are important for
lower-cost, more compact digital cameras.  However, there is no
PERFORMANCE advantage for an image sensor fabricated with CMOS vs. an
image sensor fabricated with “CCD” (usually NMOS) technology; in fact,
many “CCD” image sensors produce notably better image quality than CMOS
image sensors, for several reasons.  Also, nearly all CMOS fab
lines have significant limitations on the size of each integrated
circuit that can be produced, which will in turn limit the size and
performance of any CMOS image sensors, too.  For the highest image
quality, “CCD” image sensors will continue to be superior to CMOS image
sensors, especially for larger-format and scanning digital cameras.

Wayne: Sensitivity vs feature size – as resolutions increase, pel size
drops, unless the sensor gets larger. This has an impact on ‘ISO’
sensitivity. Are there any developments pending that are likely to
impact on this?

Ken: If a large pixel and a small pixel are alike in every other way,
the large pixel will have more signal, because it collects more photons
and has a larger well in which to store the photon-generated electrons.
Since both the large pixel and the small pixel have the same amount of
noise (we said they were the same in every other way), the large pixel
has better signal to noise, and therefore wins the ISO prize. 
This, in general, is the case. But if the small pixel can be made with
lower noise, then the small pixel may win the ISO prize even though it
has less signal, because it is signal-to-noise ratio that controls
ISO.  Because a small hunk of silicon is much cheaper than a big
hunk of silicon, and because a small hunk of silicon means smaller,
cheaper cameras, silicon designers are motivated to drive down the
noise.  Small pixels today are better than big pixels were a few
years ago.  Making lower noise little pixels is similar to making
lower noise big pixels, so it is likely that large pixels will continue
to have better ISO than little pixels, but little pixels will get good
enough for an increasing number of applications.
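Ken's argument can be made concrete with shot-noise statistics: collected signal scales with pixel area, while photon shot noise follows Poisson statistics and grows only as the square root of the signal, so SNR scales linearly with pixel pitch. A quick sketch, where the photon counts are purely illustrative:

```python
import math

def shot_noise_snr(photons):
    """Poisson shot noise: noise = sqrt(N), so SNR = N / sqrt(N) = sqrt(N)."""
    return math.sqrt(photons)

# Signal scales with pixel area, so doubling the pixel pitch quadruples
# the photons collected and doubles the SNR, all else being equal.
small = shot_noise_snr(10_000)   # e.g. a small pixel's photon count
large = shot_noise_snr(40_000)   # a pixel with twice the pitch: 4x the area
print(f"SNR ratio large/small: {large / small:.1f}")  # 2.0
```

This is why, all else being equal, the big pixel "wins the ISO prize", and why a smaller pixel can only catch up by lowering its read noise, exactly as Ken describes.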

Mike: Not likely.  Present-day image sensors can achieve a quantum
efficiency (QE) of over 60%, which means that these sensors are already
converting over 60% of the photons that strike them into electrical
signals.  The maximum possible  QE is 100%, which would
represent less than one f-stop of improvement in sensitivity over
today’s sensors.  Shot noise, which is a fundamental component of
the signals from these image sensors, cannot be mitigated or avoided by
any technology — it’s one of the “laws of physics” — only larger,
more light-sensitive pixels can truly improve the sensitivity of a
digital camera.  There is a sensor technology that could
dramatically increase the sensitivity of scanning digital cameras (by
more than three f-stops), but it’s not clear whether there is enough
market interest in these large-format devices to merit the development
cost of such a sensor.
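Mike's "less than one f-stop" figure is easy to verify: each f-stop is a doubling of light, so the headroom between today's quantum efficiency and the theoretical maximum is the base-2 log of their ratio.

```python
import math

def stops_gained(qe_now, qe_max=1.0):
    """F-stops of sensitivity gained by raising quantum efficiency to qe_max."""
    return math.log2(qe_max / qe_now)

# From 60% QE to a perfect 100%:
print(f"{stops_gained(0.60):.2f} stops")  # about 0.74
```

So even a perfect sensor would gain under three-quarters of a stop over a 60%-QE one, which is why only physically larger pixels can meaningfully improve sensitivity.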

Wayne: How does digital sensor design impact on the optical design of a
camera’s lenses? Are ‘digital’ lenses really any better in practice
than ‘film’ ones when used in a digital camera?

Ken: Of course, sensor size affects the size of the lens, and pixel
size affects the resolution required of the lens. The resolution
limitations of a typical 35 mm lens can be clearly seen as the pixel
size falls from, say, 12 microns to 6 microns. For single shot Bayer
pattern color sensors, the resolution limitation is not all bad, as
some optical resolution limitation is desirable to reduce color
aliasing (Wayne – this is the same effect as incorporating an
anti-aliasing filter, which really just introduces a small degree of
blur).

 

There is another consideration as well.  The surface of a sensor
is not uniformly sensitive to light.  In between pixels, there is
often area that is not sensitive at all.  Within the sensitive
area of the pixel, there are sometime areas that are not as sensitive
as other areas.  While full frame CCD sensors (such as are used in
some high-end backs and cameras) are nearly 100% sensitive and
uniformly so, all CMOS sensors are not and most CCD sensors are
not.  Because of this, a tiny micro-lens is often stuck on top of
each pixel to focus the light falling on the insensitive area
into  the sensitive area, which is usually near the middle of the
pixel. These little micro-lenses work best if the light is coming at
them perpendicular to the focal plane.  So light coming parallel
to the optical axis (telecentric) is best.  This is why wide angle
lenses don’t work so well with many digital cameras.  

Lens designers are therefore designing telecentric lenses.  Since
telecentric lenses require more elements, they use more glass and are
therefore more expensive.  This again motivates sensor designers
to make smaller sensors so that the lenses can be smaller, use less
glass,  and thus be cheaper.

Mike: Some smaller digital image sensors use micro-lenses over each
pixel to direct more of the incoming light into the active area of each
pixel (which, in these cases, is smaller than the spacing between
pixels, so there is some “dead area” around each pixel).  These
sensors may benefit from a telecentric lens design, where the light
rays striking the image sensor are more-or-less parallel to each other
(and therefore perpendicular to the image sensor surface over its
entire area).  Professional digital cameras typically use image
sensors that do not have micro-lenses, and therefore do not require
telecentric optics.  Most commercially-available “digital lenses”
are designed for larger-format cameras (with interchangeable lenses),
and these “digital” lenses may deliver improved performance under
certain test conditions.  However, in most real-world 
applications, there is little or no difference between the so-called
“digital” large-format lenses, and their “non-digital” (film?)
counterparts.  Our large-format scan backs make excellent lens
testing devices, and we have evaluated a number of (large-format)
“digital” and “non-digital” lenses this way.  In our testing to
date, we have obtained the best overall results with a “non-digital”
lens.

Wayne: What do you see as the likely developments in camera sensor design over the next 1, 2 and 5 years?

Ken: More of the same: better, faster, cheaper. Smaller pixels getting
better, so more pixels can be crammed onto the same hunk of silicon.
One thing not likely to change soon is the spectral sensitivity of
humans, so I don’t think pixels that image visible light will get a
whole lot smaller than 3 microns (Wayne – most current sensor designs
go down to around 8 or 9 microns, so there is still room to get
smaller).

Mike: Perhaps Foveon will get their interesting new color technology
working reliably.  Many smaller image sensors are already at the
practical limit of (small) pixel size, so it is unlikely that even
smaller pixels will be developed.  CMOS sensors may cram more
electronics onto the same silicon, but this probably won’t improve the
sensor performance (image quality) significantly, if at all.

Wayne: Will the Foveon development take over the world?

Ken: That depends on how well it works.  My experience with 100%
sampled color images vs. Bayer pattern color images is that it takes
about two Bayer pixels to equal one 100% sampled pixel.  Thus, all
else being equal, silicon hunkage could be halved.  But I don’t
know yet how close all else is to being equal. I imagine there are
significant challenges, not the least of which might be
signal-to-noise.  The property of silicon that Foveon is
exploiting to separate color has been well known for nearly as long as
silicon sensors have been around.  If it was easy to do, it would
have been done before. If they pull it off, it will be a laudable
achievement.

Mike: That may depend upon your definition of “the world”… 
Foveon currently has no intention of producing a sensor large enough to
be of interest to most professional photographers, so this small but
important segment of “the world” probably won’t be affected. 
Foveon is making a lot of noise about “true color at every pixel”, but
scanning digital cameras have enjoyed this advantage since their
introduction in 1994, delivering better image quality than Foveon could
hope to produce.  Even when Foveon gets their technology working
reliably, there are many aspects of the consumer digital camera
marketplace that do not involve technology, and these “market forces”
may have more influence on Foveon’s eventual success than their patent
portfolio or PR efforts.

Special quote of Ken’s: ‘Of one thing I am pretty certain: A
micro-acre of silicon takes a better picture than a micro-acre of
silver, and the silicon keeps getting better.’

We would like to thank Ken Boydston, President of MegaVision, Mike
Collette, President of Better Light, Inc. and Jay Kelbley, Worldwide
Product Manager of Digital Capture for Kodak for providing information
for this article.