I recently visited the Salon de la Photo and happened to wander by the Canon floor space. They had a huge presence at the show, and they hung very large prints. The images I looked at were made using the 18 megapixel Canon 7D. I was impressed. The prints were at least 20 x 30 inches in size and remained wonderfully sharp, even on close inspection. I think anyone who believes they need a 36 megapixel sensor to make a nice, sharp, huge print would have been impressed, even if they had no idea what camera was used to make the prints.
Original image (downsized to 1024 pixels), straight out of the camera using "Standard" processing and in-camera actions. The photo was made at sunset from the middle of les Deux Ponts, next to l'Île Saint-Louis. The camera was hand-held and the kit lens' OSS was enabled.
The experience made me think about an article titled "Big Sticks" that I read over on The Online Photographer blog some years back. It's a great read, and I liked the many points it makes. The comment that really grabbed my attention was,
"... a reader named Stephen Scharf not long ago objected to some things I said about the size of prints you can make from various size sensors. He claimed that he could make an excellent 13 x 19" (Super B/A3) print from 4-megapixel image files..."
At the time of the article, M. Scharf used a Canon 1D, which has a 4 megapixel sensor. By current standards, that's rather small. Mike Johnston, The Online Photographer's editor, says "...As proof of concept, he sent me a print..." M. Johnston was impressed, to say the least. The print was sharp and beautiful.
M. Scharf shared his process in the article. This got me thinking, so I took a look at what I could do along similar lines using different tools.
Processed original-size image, including the first pass of Luminosity Sharpening (downsized to 1024 pixels for this blog entry)
I wanted to test the full sequence to see if I could understand and, perhaps, match M. Scharf's processing path. If successful, I could put yet another nail in the product marketing coffin filled with half-truths and outright lies.
I use the GIMP for the bulk of my image processing. Taking a close look at M. Scharf's process, I tried to find Free Open Source Software (FOSS) equivalents to the image sharpening tools he used. After watching how various FOSS sharpening methods affect one of my images, I settled on a script found in the FX Foundry toolkit.
A simple "unsharp mask" produced much too much noise in the smooth areas for my taste. Other sharpening methods gave various results, but I still saw too much noise in the smooth regions. It was after going through nearly every method available to Gimp users that I found "Luminocity Sharpen". It's under FX-Foundary -> Photo -> Sharpen -> Luminocity Sharpen
For my test, I left the Luminosity Sharpen parameters, UnSharp Mask (USM) and Find Edge Amount, at their defaults. More recently, I've found I prefer setting the Find Edge Amount to 1.0 while leaving the USM defaults as they are (0.5 in both cases). The difference is subtle, so you would need to test to see what you like best.
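For readers who like to script the GIMP, here is a rough Python-Fu sketch of the general idea as I understand it: an unsharp mask whose effect is confined to the light/dark transitions by an edge-derived layer mask. To be clear, this is not the FX Foundry script itself; the function name, the way the USM and Find Edge Amount values are mapped onto PDB calls, and the mask-building steps are my own guesses at an equivalent.

```python
# A minimal sketch of an edge-masked sharpen, NOT the FX Foundry script itself.
# The parameter names are my guess at how USM (0.5) and Find Edge Amount (2.0)
# might map onto standard GIMP plug-ins.
from gimpfu import *

def edge_masked_sharpen(image, usm_radius=0.5, usm_amount=0.5, edge_amount=2.0):
    base = pdb.gimp_image_get_active_layer(image)

    # Sharpen a copy of the base layer; the mask below hides it in smooth areas.
    sharp = pdb.gimp_layer_copy(base, False)
    pdb.gimp_image_insert_layer(image, sharp, None, -1)
    pdb.plug_in_unsharp_mask(image, sharp, usm_radius, usm_amount, 0)

    # Build an edge map from the luminosity of another copy of the base layer.
    edges = pdb.gimp_layer_copy(base, False)
    pdb.gimp_image_insert_layer(image, edges, None, -1)
    pdb.gimp_desaturate_full(edges, DESATURATE_LUMINOSITY)
    pdb.plug_in_edge(image, edges, edge_amount, 0, 0)
    pdb.plug_in_gauss(image, edges, 2, 2, 0)  # soften the mask a touch

    # Paste the edge map into a layer mask so the sharpening only shows
    # through at the light/dark transitions.
    mask = pdb.gimp_layer_create_mask(sharp, ADD_WHITE_MASK)
    pdb.gimp_layer_add_mask(sharp, mask)
    pdb.gimp_edit_copy(edges)
    floating = pdb.gimp_edit_paste(mask, True)
    pdb.gimp_floating_sel_anchor(floating)
    pdb.gimp_image_remove_layer(image, edges)
    pdb.gimp_image_flatten(image)
```

Pasted into the Python-Fu console, edge_masked_sharpen(gimp.image_list()[0]) applies it to one of the open images and behaves roughly like a single Luminosity Sharpen pass, though the real script almost certainly differs in its details.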
Here is the test process for the images you see here (a scripted sketch of the same sequence follows the list):
- Process the image in the GIMP to the point where I'm happy with it
- Luminosity Sharpen with the Find Edge Amount set to 2.0 and the USM values set to 0.5 in both cases
- Up-rez the file using Image -> Scale Image with Interpolation set to "Cubic"
- Luminosity Sharpen a second time with the same settings: Find Edge Amount at 2.0 and the USM values at 0.5 in both cases
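For the curious, the whole sequence can also be driven from the Python-Fu console. The sketch below strings the steps together using the edge_masked_sharpen() stand-in from above in place of the real FX Foundry script; the file paths and the 8000 pixel target are just example values.

```python
# A sketch of the full test sequence, using the edge_masked_sharpen() stand-in
# defined earlier instead of the actual FX Foundry script. Paths are examples.
from gimpfu import *

image = pdb.gimp_file_load("/path/to/processed-original.tif",
                           "processed-original.tif")

# First sharpening pass at the native size (4592 pixels on the long edge).
edge_masked_sharpen(image, usm_radius=0.5, usm_amount=0.5, edge_amount=2.0)

# Up-rez to 8000 pixels on the long edge with cubic interpolation, the
# scripted equivalent of Image -> Scale Image. This assumes a landscape
# frame, so the width is the long edge.
factor = 8000.0 / image.width
pdb.gimp_context_set_interpolation(INTERPOLATION_CUBIC)
pdb.gimp_image_scale(image, int(image.width * factor), int(image.height * factor))

# Second sharpening pass at the new size, same settings.
edge_masked_sharpen(image, usm_radius=0.5, usm_amount=0.5, edge_amount=2.0)

pdb.file_tiff_save(image, pdb.gimp_image_get_active_drawable(image),
                   "/path/to/uprezzed-8000.tif", "uprezzed-8000.tif", 0)
```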
Original, processed, first Luminosity Sharpen pass, up-rez'd to 8000 pixels, second Luminosity Sharpen pass ~ this is a MASSIVE file! (downsized to 1024 pixels for this blog entry)
The results are enlightening. Indeed, if I start with a low-noise base image, I can up-rez an image file from 4592 pixels in the long dimension to 8000 pixels in the long dimension and retain apparent resolution. I say "apparent" because no information is being added; only contrast is being carefully added to the light/dark transition areas.
For this reason, you can see that the 8000 pixel image has slightly more contrast than the original-size processed image that was Luminosity Sharpened just once.
The reason I'm settling on FX Foundry's Luminosity Sharpen script is that it touches only the light/dark transition areas. The smooth-tone areas are left clean and beautiful; there is no apparent added noise in those regions.
Using the print size calculation I provided in an earlier blog entry, you can see that we can take an image from the original 14 megapixel Sony NEX-5 and enlarge it to over 30 inches in the long dimension while retaining apparent resolution.
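As a quick back-of-the-envelope check (not the exact formula from the earlier entry), dividing the long-edge pixel count by an assumed printing resolution of 240 pixels per inch gives roughly the numbers in question:

```python
# Back-of-the-envelope print size, assuming a simple pixels-per-inch rule.
# The 240 PPI target is an assumption, not the figure from the earlier entry.
def print_inches(long_edge_pixels, ppi=240):
    return long_edge_pixels / float(ppi)

print(print_inches(4592))  # native NEX-5 long edge -> about 19 inches
print(print_inches(8000))  # up-rez'd long edge     -> about 33 inches
```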
Obviously, this approach breaks down at some point. For this reason, it is worth taking the time to test these kinds of processing approaches and see for yourself how far you can satisfactorily push things. You might find that sensor size simply does not matter for the kinds of images you create.
There are two things illustrated here (you might want to click on the image and make sure you're looking at it full size).
First, the top row shows what happens when you take the raw original file, the processed original-size file, a file up-rez'd to 6000 pixels, and the massive 8000 pixel monster file, and view them all at the same dimension of 8000 pixels. You will easily see the "pixelation" that takes place up through the 6000 pixel image.
Second, the bottom row shows each file viewed at its native size at 100 percent enlargement. You can easily see the effect of Luminosity Sharpening on the three processed files. The contrast of the transitions between the light and dark areas is increased. The original image, with only the in-camera "Standard" processing, is "soft" compared to the other samples. The simple first-pass Luminosity Sharpening looks pretty nice and "cleans up" the light/dark transition areas. Now look carefully at the cropped section of the massive 8000 pixel monster file and compare it to the other files. While no information is being added, the up-rez'd image looks pretty darned fine, doesn't it?