Saturday, November 30, 2013

Tools of the Trade ~ on Making Big Prints

I recently visited the Salon de la Photo and happened to wander by the Canon floorspace.  They had a huge presence at the show, and they hung very large prints.  The images I looked at were made using the 18 mpixel Canon 7D.  I was impressed.  The prints were at least 20 x 30 inches in size.  They remained wonderfully sharp, even on close inspection.  I think anyone who believes they need a 36 mpixel sensor to make a nice, sharp, huge print would have been impressed, even if they had no idea what camera was used to make the prints.

Original image (downsized to 1024 pixels), straight
out of the camera using "Standard" processing
and in-camera actions.  The photo was made at sunset
in the middle of les Deux Ponts next to l'Île Saint-Louis.
The camera was hand-held and the kit lens' OSS was enabled.

The experience made me think about an article titled "Big Sticks" that I read over on the Online Photographer blog some years back.  It's a great read and I liked the many points it made.  The comment that really grabbed my attention was,

"... a reader named Stephen Scharf not long ago objected to some things I said about the size of prints you can make from various size sensors. He claimed that he could make an excellent 13 x 19" (Super B/A3) print from 4-megapixel image files..."

M. Scharf used, at the time of the article, a Canon 1D.  It has a 4 mpixel sensor.  By current standards, that's rather small.  Mike Johnston, the Online Photographer's editor, says "...As proof of concept, he sent me a print..."  M. Johnston was impressed, to say the least.  The print was sharp and beautiful.

M. Scharf shared his process in the article.  This got me to thinking.  So I took a look at what I could do along similar lines using different tools.

Processed original sized image, including
the first pass of Luminosity Sharpen
(downsized to 1024 pixels for this blog entry)

I wanted to test the full sequence to see if I could understand and, perhaps, match M. Scharf's processing path.  If successful, I could put yet another nail in the product marketing coffin filled with half-truths and outright lies.

I use the Gimp for the bulk of my image processing.  Taking a close look at M. Scharf's process, I tried to find Free Open Source Software (FOSS) equivalents to the image sharpening tools he used.  After watching how various FOSS sharpening methods impact one of my images, I settled on a script found in FX Foundry's toolkit.

A simple "unsharp mask" produced much too much noise in the smooth areas for my taste.  Other sharpening methods gave various results, but I still saw too much noise in the smooth regions.  It was after going through nearly every method available to Gimp users that I found "Luminosity Sharpen".  It's under FX-Foundry -> Photo -> Sharpen -> Luminosity Sharpen.

For my test, I left the Luminosity Sharpen parameters UnSharp Mask (USM) and Find Edge Amount at their defaults.  More recently, I've found I prefer setting the Find Edge Amount to 1.0, while leaving the USM defaults as-is (0.5 in both cases).  The difference is subtle, so you would need to test to see what you like best.

Here is the test process for the images you see here:

  1. Process the image in the Gimp to the point I'm happy with it
  2. Luminosity Sharpen with Find Edge Amount set to 2.0, and the USMs set to 0.5 in both cases
  3. Up-rez the file from Image -> Scale Image with Interpolation set to "Cubic"
  4. Luminosity Sharpen a second time with the same settings, Find Edge Amount at 2.0, and USMs at 0.5 in both cases
Original, processed, given a first pass of Luminosity
Sharpen, up-rez'd to 8000 pixels, then a second pass
of Luminosity Sharpen ~ This is a MASSIVE file!
(downsized to 1024 pixels for this blog entry)
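The steps above can be sketched in code.  To be clear, this is not the FX Foundry script itself, just a minimal Python/NumPy approximation of the idea: sharpen only the luminosity, up-rez, then sharpen again.  The function names, the plus-shaped blur, and the nearest neighbor scaling are all my own simplifications (the Gimp's "Cubic" interpolation is smoother, but the shape of the pipeline is the same).

```python
import numpy as np

def luminosity_sharpen(img, amount=0.5, radius=1):
    """Unsharp-mask only the luminosity of an RGB float array in [0, 1],
    so smooth color areas pick up little extra noise."""
    luma = img @ np.array([0.299, 0.587, 0.114])   # Rec. 601 luminosity
    blur = luma.copy()                             # cheap plus-shaped blur
    for axis in (0, 1):
        for shift in (-radius, radius):
            blur += np.roll(luma, shift, axis=axis)
    blur /= 5.0                                    # center + 4 neighbors
    detail = luma - blur                           # high-frequency content
    return np.clip(img + amount * detail[..., None], 0.0, 1.0)

def uprez(img, long_side):
    """Scale so the long dimension equals long_side (nearest neighbor
    stand-in for the Gimp's cubic interpolation)."""
    h, w = img.shape[:2]
    s = long_side / max(h, w)
    ys = np.minimum((np.arange(round(h * s)) / s).astype(int), h - 1)
    xs = np.minimum((np.arange(round(w * s)) / s).astype(int), w - 1)
    return img[ys][:, xs]

# sharpen -> up-rez -> sharpen again, mirroring steps 2 through 4 above:
# out = luminosity_sharpen(uprez(luminosity_sharpen(img), 8000))
```

Because the detail term comes only from the luminosity, the color channels move together at each transition, which is the property that keeps the smooth tone areas clean.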

The results are enlightening.  Indeed, if I start with a low noise base image, I can up-rez a file from 4592 pixels in the long dimension to 8000 pixels in the long dimension and retain apparent resolution.  I say "apparent" because no information is being added.  Only contrast is carefully being added to the light/dark transition areas.

For this reason, you can see that the 8000 pixel in the long dimension image has slightly more contrast than the original processed image Luminosity Sharpened just once.

The reason I'm settling on FX Foundry's Luminosity Sharpen script is that it touches only the light/dark transition areas.  The smooth tone areas are left clean and beautiful.  There is no apparent added noise in the smooth tone regions.

Using the print size calculation I provided in an earlier blog entry, you can see we can take a Sony NEX-5 (original) 14 mpixel image and enlarge it to over 30 inches in the long dimension, while retaining apparent resolution.
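The arithmetic behind that print size claim is simple: print length in inches is pixels divided by printing resolution.  The 240 ppi threshold below is an assumption on my part for a "critically sharp" print; substitute whatever value your own testing supports.

```python
def max_print_inches(long_side_pixels, ppi=240):
    """Longest print dimension, in inches, at a given printing resolution."""
    return long_side_pixels / ppi

print(max_print_inches(8000))  # the up-rez'd file: ~33 inches
print(max_print_inches(4592))  # the native NEX-5 file: ~19 inches
```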

Obviously, this approach breaks down at some point.  For this reason it is worth the time it takes to test these kinds of processing approaches to see for yourself how far you can satisfactorily push things.  You might find that sensor size simply does not matter for the kinds of images you create.

There are two things illustrated here (you might want to click on the image and make sure you're looking at it full size).  

First, the top row shows what happens when you take the raw original file, the processed original sized file, an up-rez'd to 6000 pixels file, and the massive 8000 pixel monster file and view them all at the same dimension of 8000 pixels.  You will easily see the "pixelation" that takes place up through the 6000 pixel image.

Second, the bottom row shows each file viewed at its native size at 100 percent enlargement.  You can easily see the effect of Luminosity Sharpen on the three processed files.  The contrast transitions between the light and dark areas are increased.  The original, straight out of the camera in-camera processed image is "soft" compared to the other image samples.  The simple first-step Luminosity Sharpen pass looks pretty nice and "cleans up" the light/dark transition areas.  Now look carefully at the cropped section of the massive 8000 pixel monster file and compare it to the other files.  While no information is being added, the up-rez'd image looks pretty darned fine, doesn't it?

Tuesday, November 12, 2013

Tools of the Trade ~ On Considering an Important Truth

Assume, for a moment, that photographic tools are really no different than tools used by other artists.

Pencil, pen, brush, ink, paint, chisel, forge, and hammer are all tools of art.  When viewing a finished work, how the work was created is, many times, less important than how a viewer responds to a work.

Assume, for a moment, that the goal of photography is to make images that express how you feel and how you "see" the world. 

In this way, cameras, lenses, printers, and paper are simply tools of photographic art.  Carefully consider how you look at a photograph and see if you can tease apart the marketing hype and camera equipment forum driven relationship between how the image was made from how you respond to it.

~ Having a camera is many times better than not having a camera ~ 

For making truly great photos, it simply does not matter what you use. 

The properties of one camera over another are largely unimportant.  Cameras simply enable image creation.  As we have seen, the current crop of imaging sensors are more than sharp enough for just about any subject in just about any situation.  What matters is how you "see" and how you use the tools of photographic expression available to you.

On a practical level, any sensor of 4 megapixels or greater is capable of delivering critically sharp prints up to 13 x 19 inches and well beyond.  I will write more about printing in the next blog entry on Tools of the Trade.  I hope to illustrate that, in making beautifully expressive prints or publishing to the web, sensor size simply does not matter.

There is an interesting exception to my statement that having a camera is many times better than not having a camera.  There is a large field of photographic art that is, in the traditional sense, camera-less.  Commonly available and shockingly inexpensive flatbed scanners are the solution I'm considering here.  It is easy to find a perfectly usable high resolution flatbed scanner for 10USD/10Euro or less.

If you are curious about this photographic solution to image making and aren't already aware, check out Flickr's "Interestingness" selection of scannerart.  There are some wonderful ideas to be explored using this approach.

~ Having a lens is many times better than not having a lens ~

As we have seen, optical resolution out-performs currently available imaging sensors.  This holds true with an aperture setting anywhere from wide open down through f/11.

My claim that sensors are the limiting factor in photographic resolution, while seemingly heretical, is easily backed.  A blogger recently compared a Sony 50mm f/1.8 against the much vaunted Leica Summicron 50mm f/2.  The author misunderstands the results by claiming equivalent optical quality between the two lenses.  From my preceding blog entries, you can see what the role of the sensor really is.  In any event, results like these must drive Leica users crazy.  If they are interested in the finest image quality, their pricey equipment is really no better than, say, Sony's gear that's available at a fraction of the cost.

We have also seen where chromatic aberrations (CA) can affect resolution near the edge of the frame.  I talked about how to control the effects of CA by reading test results to learn which aperture settings return the lowest CA.

We have learned how to read modulation transfer function (MTF) charts.  Hopefully you can now see how contrast delivered by a lens to a sensor is different from all the other optical properties you might encounter. Field curvature and field spatial distortions could also be considered, but these details are not readily available in MTF chart information.

Yet, with all of this detailed knowledge about lenses and their properties, the single most important factor in image resolution remains the sensor.  Further, optical performance effects can be easily controlled in post-production.  Contrast, CA, and field spatial distortions can all be "processed" out or corrected for by software you likely already own, and are many times corrected in-camera.  In short, base optical performance need not be considered when choosing the best tools for your intended situations.

There are two interesting exceptions to my statement that having a lens is many times better than not having a lens.  There is a fascinating field of lens-less solutions that dates back hundreds of thousands of years and was more recently used by medieval artists.  Solar eclipses have been safely visible for as long as there have been trees and beings to witness the event as images projected on the ground.  Much more recently, Canaletto was only one in a long line of artists who used a "camera" (the word means "room" in Italian) to project an image onto canvas from which he would paint.

In current photography, we have at our disposal two interesting lens-less solutions.  They are the pinhole and the zone plate.  If you like the style and approach of these solutions, you could altogether avoid the costs of a glass optic.  For inspiration, here is Flickr's "interestingness" images for pinhole and zone plate work.

Which might lead a reader to wonder: 
If cameras, lenses, product marketing, and on-going internet forum flame wars are not important in photographic image making, why did I spend four long blog posts and well over a decade of my life considering the minutia of photography equipment?

One answer is that I was trained and worked in software and electronics engineering.  Taking a rational view of the craft and art of photography comes naturally to me.  I have an innate curiosity about things and the way they work.

Another answer is that I felt pushed and pulled by the marketing hype and on-line discussion forums.  It seemed all too easy to be misled and to stumble on irrational explanations of things that simply were not provable.  When I say irrational, I mean it in the sense of being not rational, in the sense of being emotional and not scientifically thought through.  So much of what passes for discussion about photography gear is nothing more than wishful thinking and unsupportable claims.

I wanted to get to the truth of the matter, and the truth I have come to understand is rationally justifiable.  Once the truth is known, I can turn my time and energies toward other interesting things.  The truth of things allows me to safely ignore the yammering, babbling masses and marketeers while concentrating on making the best images I possibly can.

If, on the other hand, it's easier to see the practical application of my conclusions, what better way than to share the work of someone who is increasingly internationally known, celebrated, and heaped with well deserved accolades?  While it will be easy to sort out what the photographer uses, try to postpone that search long enough to look at his results.  Perhaps you will see for yourself how effectively used photographic equipment quickly transcends marketing hype and on-line forum equipment flame wars.

As Bill Gekas recently wrote on Facebook, "Revisiting some photography groups and forums the other day made me a little sad that some things just don't change and probably never will with some people. All gear no idea!!!"

Monday, November 11, 2013

Tools of the Trade ~ Resolution and the Real World

Let's have a serious look at lenses and the ever-popular topics of resolution and IQ, shall we?

After reading my prior two posts (one and two) that set the stage for this series on Tools of the Trade, a reader should be able to easily follow details in this post.  I am about to make a potentially bold series of statements, and then will back them up with what I know from years of my own camera system testing.

1. Camera sensors currently limit image resolution.  Lenses do not. 

I know this is true from my own testing of optical resolution.  I looked at large and medium format lens performance on film, and, more recently, 35mm lens performance on digital sensors.  It took me over a decade of looking at this to understand what the results were clearly showing me from day one.  My understanding of optics, resolution, and true limiting factors to resolution were later confirmed by an optical physics professor who performs research at a university in the US.

It takes a terrible lens to see degradation in image resolution.  Are poor resolution lenses available?  It seems there are not many, and those that exist tend to be priced accordingly.  Except if it has a Zeiss or Leica label on it.  Then you see the poor performers termed "quaint" or "having a certain look."  At the other end of cost, inexpensive kit lenses have had enough pressure from "pixel peepers" to force manufacturers to improve those optics (simply look at the number of 18-55mm kit lenses Canon has offered over the past decade).

Readers need to approach comments such as "...It is crop, not FF, that requires sharper lenses, since for photos displayed at the same size...", or any combination of "...this new big sensor requires sharper lenses...", with extreme caution.  The only factor of interest in terms of physics and resolution is the number of line pairs per millimeter the sensor resolves.  The present limit of sensor resolution of any APS-C, micro 4/3rds, Full Frame, or Medium Format sensor remains less than 120 line pairs per millimeter.
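As a sanity check on that 120 line pair figure, a sensor's Nyquist limit can be roughed out from its pixel count and width: one line pair needs at least two pixels.  The sensor widths below are nominal values I'm assuming for illustration, not measured dies.

```python
def sensor_lp_per_mm(pixels_wide, sensor_width_mm):
    """Nyquist limit: one line pair needs at least two pixels."""
    return pixels_wide / (2 * sensor_width_mm)

print(sensor_lp_per_mm(4592, 23.4))  # 14 mpixel APS-C: ~98 lp/mm
print(sensor_lp_per_mm(7360, 35.9))  # 36 mpixel Full Frame: ~103 lp/mm
```

Note that both figures land below 120 line pairs per millimeter, which is the point of the paragraph above.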

An optical physics effect called the diffraction limit affects optical resolution only at very small apertures.  If your sensor resolves around 123 line pairs per millimeter (the center diffraction limit of any optic at f/11), you may begin to see resolution degradation start at f/16 and continue through to the end of your aperture range (to f/32 and beyond).  This leaves a very long aperture range available to you.  From wide open down through f/11, all these apertures will be available to you and, in terms of resolution, will out-perform your sensor.  This physics effect won't be seen on lower resolution sensors (including most APS-C and all Full Frame sensors in current production) until an optic is stopped down to f/16 and beyond.
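For the curious, the diffraction limit figures can be roughed out with the Rayleigh criterion.  I'm assuming green light at 550 nm here; the exact line pair count you get depends on the wavelength and resolution criterion chosen, so treat these as ballpark numbers rather than the precise 123 lp/mm cited above.

```python
def diffraction_lp_per_mm(f_number, wavelength_mm=550e-6):
    """Rayleigh-criterion resolution limit at the center of the frame."""
    return 1.0 / (1.22 * wavelength_mm * f_number)

for f in (5.6, 8, 11, 16, 22):
    print(f"f/{f}: {diffraction_lp_per_mm(f):.0f} lp/mm")
```

At f/16 the figure dips below roughly 100 lp/mm, which is why degradation on current high resolution sensors starts to appear around that aperture.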

Note: The obvious exceptions are "soft focus" optics that deliberately smudge a scene.  Nothing in over a hundred and fifty years of photography has changed.  

2. Modulation Transfer Function (MTF) charts do not tell us how sharp a lens is.

MTF charts only tell us the amount of scene contrast a lens is capable of passing along to the sensor at various low resolutions.  Look closely at any MTF chart and you will see lines that show the contrast given at, for instance, 10 line pairs per millimeter, and another set of lines that show contrast retention at, for instance, 40 line pairs per millimeter.  Given the physics of optics, these are rather low resolution settings (scroll down to see the diffraction limit chart).
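A concrete way to read those chart lines: the percentage an MTF chart reports is, to a first approximation, the Michelson contrast the lens delivers from a perfectly black-and-white test target.  The intensity values below are invented purely for illustration.

```python
def michelson_contrast(i_max, i_min):
    """Contrast of a light/dark pattern: (Imax - Imin) / (Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)

# A perfect target has contrast 1.0.  If a lens renders the white bars
# at 0.9 and the black bars at 0.1, it has passed along 80% of the
# target's contrast at that spatial frequency:
print(michelson_contrast(0.9, 0.1))  # ~0.8, i.e. "80%" on the chart
```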

If MTF testing is not a measure of resolution, why then do lens manufacturers publish MTF charts?  It's because the human eye, in most cases, perceives resolution as contrast.  In practical terms, digital sensors will be able to capture a quick transition from black to white as long as a lens can provide it.  It's as simple as that.

When reading comments across the 'net, statements such as "...you can see from the above MTF charts, now that you know how to read them, the difference that are seen can be easily be quantified..." need to be approached with extreme caution.  There is nothing in an MTF chart which correlates in any meaningful, direct way to other optical properties you may find important.  This includes sensor resolution, field flatness, lens distortions, and chromatic aberrations.

Again, the only thing MTF is attempting to show is a lens's ability to pass contrast to the sensor.  And that, only on a flat two dimensional plane.  This last sentence has importance when we talk about field flatness.

3. Chromatic Aberrations (CA) can be measured and provide useful information about how a lens can perform at the edges of a scene at different apertures.

Many currently published lens tests measure a lens's CA.  It's worth the time it takes to review lens tests in this area, as there is a meaningful correlation between test results and real world camera system performance.

Let's take a look at three lenses: Canon 50mm f/1.4 USM, Zeiss 50mm f/1.4 Planar T* ZF, and Leica 50mm f/1.4 Summilux R.

What do we see?  Canon's CA, as measured at the edge of the image frame, is substantially less than one pixel width from f/1.4 all the way through f/11.  The Zeiss' CA is at least one pixel width, and varies according to aperture.  The Leica's CA also crosses over 1 pixel width at all apertures.

In the real world, when using a Zeiss or Leica 50mm lens, a single pixel at a light to dark transition at the edge of the image frame may show purple or blue/green "fringing".  Is it enough to worry about?  That depends on your "pixel peeping" experiences.  A lens's inability to bring together the visible color spectrum to a common point may not be visible in a very large print.  You would need to decide.

Let's say you decide that a pixel's width of CA is important enough to you to avoid.  Taking this position, the currently priced 300USD new auto-focus Canon 50mm easily out-performs both the new manual focus 725USD Zeiss ZF and a used eBay'd 1100USD to 1600USD Leica Summilux R.  So understanding the level of CA a lens exhibits might be important in evaluating its "performance" (using the subjective word).
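To put those one-pixel comparisons in concrete terms: a CA figure published in micrometres only matters relative to your sensor's pixel pitch.  Both sets of numbers below are made-up illustrations, not measurements from the lenses named above.

```python
def ca_in_pixel_widths(ca_um, pixel_pitch_um):
    """How many pixels a lateral color fringe spans on a given sensor."""
    return ca_um / pixel_pitch_um

# assuming a hypothetical 6.4 micron pixel pitch:
print(ca_in_pixel_widths(4.0, 6.4))  # ~0.6 px: under a pixel, invisible
print(ca_in_pixel_widths(9.0, 6.4))  # ~1.4 px: may show as visible fringing
```

The same lens can therefore fall on either side of the one-pixel line depending on which body it is mounted on, which is worth remembering when reading test results.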

Further, processing (either in-camera or on a computer) can eliminate CA effects.  Olympus and Panasonic are well known for providing this kind of processing in-camera.  In-camera CA correction by Canon, Nikon, and Sony should be catching up shortly (if they haven't already).

4. Different lenses render the out of focus (OOF) areas in a scene differently.

The highly subjective phrase of "good" OOF attempts to define something called "bokeh".

You may read arguments on the 'net about OOF of one lens or other and which gives a better result than something else.  If "bokeh" is important to you, all that matters is that out of focus areas in an image give an even distribution of light across OOF highlight areas.

Out of focus area rendition testing is quite common.  On a practical level, any desired "bokeh" effect can be reviewed and compared between various lenses.  Note that there is nothing in a lens design nor in a MTF chart which would indicate how OOF will be rendered.

The exceptions, of course, are lenses that deliberately manipulate OOF areas.  Way back in the mid-1800s, OOF effects were mathematically manipulated, starting with Petzval lenses.  The optical effects are in great demand today, if eBay auction results for Petzval lenses are any indication.

In the early part of the 20th century, Contax designed their lenses to produce a "creamy" OOF.  Leica lenses, on the other hand, were and are designed in a way that tend to give a "harsh" OOF.

In current times, Nikon offers two wonderful lenses, the 105mm f/2 DC and 135mm f/2 DC.  Nikon's optical team used well understood optical principles that allow a user to change the lens element spacing, which directly affects OOF.  Twist the ring and change the OOF.

In reading comments across the 'net, statements such as "One of the areas of image quality that MTF can help determine is bokeh..." need to be approached with extreme caution.  There is absolutely nothing in a MTF chart that meaningfully relates to "bokeh".  A whitepaper from Zeiss confirms this.

5. Field flatness, or field curvature in lenses can be an important factor in determining optical performance.

Macro lenses are typically designed to ensure a flat field.  They are many times used in photographing documents, stamps, and other flat subjects.  On the other hand, many zoom and wide angle lenses suffer from varying degrees of field curvature.  Photographers using such lenses may feel, under certain circumstances, that a lens is not "good" (to use the subjective word).

If you photograph a flat two dimensional surface, such as a painting, and see that the edges are out of focus, but that the center is correctly sharp, you may be experiencing the effects of field curvature.  In this situation you could set the aperture to f/11 (which is at or above the limits of your sensor's resolution) and try again.  If the edges come into acceptable focus, your lens might suffer from only mild field curvature that is easily handled by selecting an aperture with sufficient depth of field to cover for the effect.

If you try to use MTF charts to fully evaluate a lens's performance, you can miss something important.  Take the MTF examples in this "test".  In noting the "drop off" of contrast toward the edge of the frame, the writer suggests the performance of Canon's 400mm f/5.6L is superior to Canon's 100-400mm f/4.5-5.6L.  It's important to realize that most MTF tests do not account for field flatness and limit testing to a two dimensional surface.  In this case, if there is field curvature in the 100-400L, the MTF results would not accurately illustrate the lens's contrast capturing abilities in the curved regions of focus.  To the MTF test, the edges of the frame would appear less contrasty than the center by a fair amount.  I am making this particular point since there is a large community of photographers who claim their 100-400mm Canon L lenses are indeed quite sharp and contrasty across the field, disputing the Luminous Landscape writer's claims.

Before claiming that a lens is "bad", a user might want to check to see what the field curvature is before tossing the optic out.

6. Lens distortions (barrel or pincushion) can easily be seen and are a nuisance to correct when straight lines are important.

Lens distortions are easily measurable and many testers report their findings.

Back in my old film days, it was commonly accepted that 35mm wide angle, some "normal", and "short telephoto" lenses suffered from field distortions.  One of the most vivid examples came from a Canon SLR shooter who used an 85mm f/1.2L to photograph trains.  The photographer complained that the lens's barrel distortion was bad enough that straight lines were nearly always bent in his images.

Shooters of architecture are well aware of the issue of distortions.  I am convinced this is why companies like Sinar and Schneider continue to make cameras and lenses.  It's important to have an accurate and correct solution when you need it, and when cost is not the prime force in image generation, such gear can provide the most direct solution.

From a lens design perspective, it is easier to control the broad range of design issues with a symmetrical lens than it is with a complex asymmetrical optic.  Look at a cross section diagram of a plasmat lens and compare it against a low cost kit lens.  What do you see?  Count the number of lens elements in each design.  Now imagine building one.  Which would be "easier"?

With the advent of software driven lens designs, manufacturers are able to build lenses of incredible complexity, while at the same time controlling and balancing trade-offs between resolution, contrast, chromatic aberration, field flatness and optical spatial distortions.

Which brings us back to resolution.  When a photographer "pixel peeps" and claims one lens is better than another, most of the time they woefully misunderstand the camera's imaging system and its capabilities and actual characteristics.  Further, readers of "tests" that share photos made with various lenses may be confused or under-educated by the lack of carefully gathered, properly understood, and shared information.

While this blog entry has become much more complex than I originally intended, I remain interested in making sure the proper background is set for my making the claim that it does not matter what camera or which lens you use as long as you know how to use what you have.