Saturday, April 24, 2021

Orthochromatic film emulation ~ a real world scene

As I said at the start of the prior post, someone whose work I follow, and whose PhD thesis on pictorialist lenses I've read closely, posted something that captured my attention.  I find this and other images of his very charming.  As you can see from the EXIF, it is an image made with an old single coated lens Zeiss Ikonta B film camera on orthochromatic film.

As background information, Ortho film is very sensitive to blue light, minimally sensitive to green, and not sensitive to red at all.  This was the way _all_ black and white images were made before the invention of panchromatic film.  Panchromatic film is sensitive roughly equally to all visible colors in the spectrum.  It was invented in the early part of the 20th century and is still commonly available.  Orthochromatic film, on the other hand, tends to be difficult to find today.

After creating a digital emulation of Ortho film, I wanted to see how it behaved in the "real world."

Keep in mind that this is just one image.  To really "know" and understand something takes a bit of effort, and many questions will not be answered by a single photograph.  However, I found the following example to be interesting.

The approach I used was to set the exposure/contrast/vignetting levels where I wanted them.  Then I moved the mid-section of "Curves" up the range to lift the mid-tones and to ever so slightly flatten the highlights.  I will explain this further in a moment.

NOTE: I learned several things from a series of articles that Mike Johnson has posted over the years about converting digital color to black and white.  In digital conversions to Black and White -

  • Expose for the highlights and process for the shadows - this is the exact opposite of what you do in film photography.  In fact, digital conversions, what with modern sensors and all that, tend to show more detail in the shadows than in traditional film.
  • Luminance - Remember that the human eye perceives same-energy colors differently.  For instance, we see blue as darker than same-energy green.  This is how, in black and white photography, we can begin to see what photographers call "tonal separation."  It used to be that "tonal separation" was the Holy Grail of great B&W photography, and it was very difficult to control.  This isn't surprising, as a simple desaturation that takes no account of how the human eye sees colors can produce a Muddy Mess.  I have taken to using luminance unless I'm working to achieve a specific "look", such as what I'm trying to illustrate with this Ortho film emulation.  With regards to "looks", a good digital B&W conversion can "look better" than film.  I know, more heresy.
  • Lift the mid-tones - I use "curves" to pull up the mid-tones and to slightly flatten the highlights.  This is, actually, one of the "secrets to success" for converting digital color to Black and White.  Why?  Because in B&W photography we printed our negatives to paper.  Paper, it turns out, lifted the mid-tones.  If paper did not lift the mid-tones, we would have had a world filled with Muddy Messes of non-luminescent grays.  Try it sometime.  Take a digital color image and convert it to B&W.  Then gently lift the mid-tones and watch what happens.  If the image is too bright, bring the "lightness" down.  Do not use "exposure" to do this.  "Lightness" preserves the highlight regions, whereas "exposure" brings the entire exposure range down the curve.  Or take an old B&W negative, scan/photograph it, and invert the values.  You will see the Muddy Mess I'm talking about.  Then lift the mid-tones using "curves", et voilà!  Immediately you will recognize print tone values.  It's magic.  Trust me on this.  (A small code sketch of this mid-tone lift follows the list.)
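To make the mid-tone lift concrete, here is a minimal sketch in Python using numpy and Pillow.  It assumes a simple three-point curve; real "curves" tools fit a smooth spline through the control points, but the effect is the same.  The filenames and the exact lift amount are hypothetical examples, not a prescription.

    # Minimal sketch of a mid-tone lift via a three-point "curve"
    # applied to a grayscale image.  Control point values are examples.
    import numpy as np
    from PIL import Image

    def lift_midtones(path, mid_in=0.5, mid_out=0.62):
        gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
        # Piecewise-linear curve through (0,0), (mid_in,mid_out), (1,1):
        # pulls the mid-tones up while pinning the black and white points,
        # and gives the highlights a gentler slope (slight flattening).
        lifted = np.interp(gray, [0.0, mid_in, 1.0], [0.0, mid_out, 1.0])
        return Image.fromarray((lifted * 255).astype(np.uint8), mode="L")

    lift_midtones("bw_conversion.jpg").save("bw_lifted.jpg")  # hypothetical files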


Black and White digital image conversion comparison


As you can see, this scene from a viewpoint at Sainte Agnes, France has muted, mixed colors.  The buildings and foreground vegetation are warm in tone.  The sky and horizon are blue.

Looking at the simple desaturate output and comparing it with the human perception model (luminance) conversion shows what we might expect from modern black and white film as well as from desaturation-converted digital color images.  The desaturated image is nothing to write home about.  The luminance conversion shows better tonal separation.

Considering the Ortho image, we can clearly see where the blue portions of the scene are lighter than in the other two conversions.  Overall, it looks as if there is more moisture in the air.  It begins to have that orthochromatic film "look."

If you want to fully emulate the old Ortho film look, study where early photographers placed the exposure value and emulate that.  It can be an interesting exercise.

Wednesday, April 21, 2021

Orthochromatic film emulation ~ Black and White photography

Someone whose work I follow, and whose PhD thesis on pictorialist lenses I've read closely, posted something that captured my attention.  I find it very charming.  As you can see from the EXIF, it is an image made with an old single coated lens Zeiss Ikonta B film camera on orthochromatic film.

Ortho film is very sensitive to blue light, minimally sensitive to green, and not sensitive to red at all.  It can produce a distinctive "look."  In fact, this was the way all black and white images were made before the invention, in the early part of the 20th century, of panchromatic film, which is sensitive roughly equally to all visible colors in the spectrum.

Working in digital and using color channels, we can emulate orthochromatic film.  The recipe is very simple.  Set the color channels in your processing software's Black and White conversion module as follows.

  • Blue - 100
  • Green - 33
  • Red - 0

Simple as that.
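For those who prefer to script it, here is a minimal sketch of the recipe in Python using numpy and Pillow.  Note that I normalize the weights so the result stays in range; how your software's conversion module normalizes its channel mixer may differ, and the filenames are hypothetical.

    # Ortho emulation sketch: Blue 100, Green 33, Red 0, normalized.
    import numpy as np
    from PIL import Image

    def ortho_emulate(path):
        rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        # Zero sensitivity to red, minimal to green, full to blue.
        gray = (0.00 * r + 0.33 * g + 1.00 * b) / 1.33
        return Image.fromarray(gray.astype(np.uint8), mode="L")

    ortho_emulate("color_wheels.png").save("color_wheels_ortho.png")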

In the following example we can see the original color wheels in color.  This is followed by the color wheels desaturated.  This method is what I thought digital cameras used to generate/process in-camera black and white images.  After all, it's how panchromatic film (more or less) works.

Happily, Sony proved me very wrong on this point.  But it is the only method available to Leica in their black and white only cameras.  In this sense, Leica black and white images are no better than old panchromatic film.

After that comes the human perception model (luminance) conversion.  Remember that the human eye perceives same-energy colors differently.  We see blue as darker than same-energy green.  This is how, in black and white photography, we can begin to see what photographers call "tonal separation."
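Here is a sketch of both conversions side by side in Python using numpy and Pillow.  The Rec. 709 luminance weights below are the standard published ones; what any given camera maker actually uses internally is my assumption, and the filenames are hypothetical.

    # Simple desaturation vs. luminance (Rec. 709) conversion.
    import numpy as np
    from PIL import Image

    rgb = np.asarray(Image.open("color_wheels.png").convert("RGB"), dtype=np.float32)

    # Desaturation: weight every channel equally.
    desat = rgb.mean(axis=-1)

    # Luminance: green weighted heaviest, blue lightest, as the eye sees.
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)

    Image.fromarray(desat.astype(np.uint8), mode="L").save("wheels_desaturated.png")
    Image.fromarray(lum.astype(np.uint8), mode="L").save("wheels_luminance.png")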

As I said, Sony's in-camera black and white images don't simply desaturate a scene.  They use, instead, this human perception model conversion.  It's brilliant, actually.  Tonal separation in-camera.  Now who would've thought?  :-)

Finally, we will see how the orthochromatic film emulation affects the outcome of the color wheel conversion.  Pay close attention to the visual intensities between colors.  Things change pretty obviously compared with the prior two black and white conversion methods.  Using this approach, perhaps we can begin to emulate the "look" of pre-panchromatic film images?  Let's have a look, shall we?

 

Black and White Conversion Comparison

 

Coming back to my friend's work for a moment, is what makes his images charming the old Zeiss Ikonta B camera and its uncoated lens?

Is it the Ilford orthochromatic film that he uses that makes his images so wonderful?

Or is it, perhaps, the processing chemicals that he's using and the subtle grain his images have?

Is it a combination of these things, or something else entirely?

Monday, April 19, 2021

Super Resolution ~ Comparing the three methods [part 4 of 4]

In this blog entry I would like to do the glaringly obvious and compare side by side the results of the three different methods I tried for creating "super resolution" images.

The three methods are Cubic Up-Rez with USM, Image Stacking, and Image Stitching.

This started after reading articles on Photoshop's "new" function that up-rezzes an image.  This, of course, comes on the heels of Topaz's AI somethingorother "super resolution" product.  The "super resolution" technique has even been applied to cell phone images.


REMINDER: Increasing image sizes using "super resolution" software products does not add information.  If data isn't in the original file to begin with, it will not be added by increasing the dimensions of the image.  This is potentially important as some software providers imply that Artificial Intelligence (AI) is being used to improve an image in ways that were not previously possible.  This is a demonstrably false implication.  Don't fall for it.


One last time, here is the base scene that I will work from.

Base Image ~ "Super Resolution" comparison ~ 2021

 

In the following comparison I show the base image as processed in RawTherapee and with "Capture Sharpen" applied.  Then I selected what I felt were the best results of the Cubic Up-Rez, Image Stacking, and Image Stitching methods.

 

Best Output of 3 methods ~ "Super Resolution" comparison ~ 2021

 

I ranked the "super resolution" results in order of preference, from best to worst.

Let's start with the worst method, or at the very least the most difficult method to manage: the image stacking technique.  I've tried this method many times and I fail to see how information is added to the final up-rez'd output.

It sure seems to do a great job of smoothing out the noise, but I struggle to see where detail in a scene is actually increased.  So I'm left wondering what I've done wrong, or what I've not been careful enough about.  This approach certainly works in the Olympus and Sony products (I think Pentax offers this on some of their products, too).

Given the poor results, I've decided that if I really, really need to increase image size and I only have one image, then the next method is the way to go.

Using Cubic interpolation coupled with light/careful/undramatic USM sharpening to increase image size can work rather well.

There's an important secret, and it is to set the interpolation sample rate at least 2x higher than the native image dpi.  Many software packages set the native file resolution to 300dpi.  Therefore, when using the Cubic interpolation method, set the sample rate to at least 600dpi.  I like 1200dpi when using the Gimp.

If you are not using the Gimp to process your images (and most people do not use this Open Source Software), you will need to confirm that the interpolation filter is actually working correctly.  I have seen too many software packages that allow you to increase the sample rate, but then (for some strange reason) the selected sample rate is not applied and the output image ends up "blocky" and "pixelated."

When done correctly, and if you start with a "clean file" (ie: well controlled noise), the USM sharpened Cubic Up-Rez'd output looks pretty good.  It is as good, in fact, as anything I've seen from the new Super Resolution products because, as I said earlier, those products aren't really bringing anything new to the table.

Picking at a favorite scab of mine, I've found that the Sony APS-C sensors (even the 10+ year old sensors) out-perform Canon's current Full Frame sensors at low ISO when using the Cubic USM method.  Canon CR2 raw images have a lot more noise in the shadow areas than Sony ARW raw files.

Moving on to the final, and obviously best, way of making "super resolution" images, we come to Image Stitching.  This is clearly the best way to make bigger images while retaining all the resolution of the camera's sensor.  There are no imaging tricks trying to increase apparent resolution here.  We are simply dealing with native off-the-sensor resolution, which can be pretty darned good.

So there you have it: my recommended methods for increasing image size.  If you have the time and a subject that isn't moving, and if you need a large "super resolution" image file, use the Image Stitching approach.  If you don't have the time, but you still need a larger image file than what you can get natively out of your camera, consider using the Cubic Up-Rez with Unsharp Mask image sharpening approach.

Saturday, April 17, 2021

Super Resolution ~ Image Stitching [part 3 of 4]

Previously I covered a simple cubic up-rez + USM "super resolution" technique, and then an up-rez'd image stacking approach finished with two different sharpening tools.

Continuing to look at how "super resolution" images can be made, I turn my attention now to image stitching.  This is where you take a sequence of images, each a smaller portion of a scene, and stitch them together to create a large image file.

To reiterate, this topic re-started for me when some folks talked about Photoshop's "new" function that up-rezzes an image.  There are comments that Topaz's AI somethingorother is better.  And, of course, there have been comparisons showing how "good" an up-rez can be these days.  But before all of this there were the original instructions on how to image stack to hopefully gain resolution during an up-rez (ie: the Olympus and Sony sensor "wiggle" functions).

Here is a third way to try to gain image resolution.  Using a camera's native sensor resolution, the goal is to take a number of section images that can be stitched into an image of potentially far greater resolution.  The technique is extensible and is the basis for the creation of "gigapixel" images.

Much smaller (and therefore much easier to manage) than "gigapixel" work is the image stitching approach I use here for this demonstration.

  1. Take a number of handheld images of portions of a scene
    NOTES:
    • It can be helpful to set the camera to manual mode, where you determine the shutter speed, aperture, and ISO.  This will keep the exposure consistent between images, particularly when there are brighter and darker areas that the camera's exposure system might try to compensate for as you take each section image.
    • Make sure you overlap adjacent images by at least 20 percent.  Some practitioners have suggested a 50 percent overlap between images.  The photo stitcher will need enough shared information between images to match the sections that will build the final output.
    • If your subject is fairly close, you might benefit from making sure you swivel the camera around the optical nodal point of the lens.  Otherwise there will be position differences between images that the stitcher may have a difficult time matching.
  2. If you shoot RAW format, process images using the exact same actions/steps/processes. 
    NOTES:
    • Do not compensate for exposure.  Choose one set of curves or contrast/lightness/exposure settings and use these for every image.
    • Apply the exact same lens profile to all images.
    • Correct for vignetting in the lens profile, too.  This will help the image stitcher to not work too hard to keep the image to image transitions smooth.
  3. Load the images into a photo stitcher and create a large image from the smaller image sections.  (A code sketch follows these steps.)
    NOTE:
    • If the stitcher can write 16bit tif/psd/xmf formatted output, you can then process the image to completion using your processing software.  This can be helpful for further color corrections, applying a decent vignette, and any action that benefits from a 14bit or 16bit color depth.
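As a sketch of step 3: dedicated stitchers such as Hugin give far more control, but OpenCV's built-in stitcher can illustrate the idea in a few lines.  It assumes the section images were already processed identically per step 2, and the filenames are hypothetical.

    # Stitch processed section images into one large output.
    import cv2

    paths = [f"section_{i}.tif" for i in range(1, 7)]
    images = [cv2.imread(p) for p in paths]

    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch(images)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("stitched.tif", pano)
    else:
        print(f"Stitching failed with status {status}")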

 

Here, one last time, is the base scene that I tried to emulate.

Base Image ~ "Super Resolution" comparison ~ 2021

 

Here is the stitched image. 

Stitched 6 images ~ "Super Resolution" comparison ~ 2021

 

 

As you can see, it is broader than the base scene above, as I took more image sections at either end.  Also note that the final output, while over 11,000 pixels long, is only 5,500 pixels high.  I used a 6000x4000 24mpixel Sony NEX7, and there was just enough "drift" between the handheld image sections that I lost 250 pixels top and bottom.

 

Stitched Image ~ "Super Resolution" comparison ~ 2021

 

Now we seem to be getting somewhere.

The stitched image retains all the "Capture Sharpen" goodness that the smaller section files contain.  There's really no need to sharpen any further.

For grins, however, I did exactly that.  I sharpened this already very sharp image.  When is "more" ever too much?

An UnSharp Mask (USM) of 2 pixel width and 0.5 contrast step takes the big image resolution "over the top".  If you like the effect, then here you go.  You'll get nothing sharper.

The Richardson Lucy sharpened image looks even more "over the top", but it is starting to look "artificial" and "water colory."

OK.  I'm done for today.  I will reserve further comment on this approach until the next blog entry, where I will try to sum up my findings from the three different "super resolution" methods.

Friday, April 16, 2021

Super-Resolution - Image Stacking + Sharpening [part 2 of 4]

Previously I covered a simple cubic up-rez + USM "super resolution" technique.  In this blog entry I would like to cover a second "super-resolution" technique.  This involves up-rez'd image stacking.

This all re-started for me when some folks talked about Photoshop's "new" function that up-rezzes an image.  Some people think it's the cat's meow.  Others point out that Topaz's AI somethingorother is better.  And there have been comparisons showing how "good" an up-rez can be these days.

In contrast, the image stacking approach takes the idea that shooting a number of images handheld will cause just enough pixel-to-pixel displacement that a careful practitioner can average the information by up-rezing each image in a layered stack and then setting each layer's opacity.  The idea tries to emulate in-camera multi-shot sensor displacement and image stacking.  Olympus and Sony implement this feature on some of their cameras.

Here is the approach.  (A code sketch follows the list.)

  1. Take a number of handheld images of a scene
  2. Load these images as layers into Photoshop or the Gimp
  3. Cubic up-rez - with an appropriately high interpolation filter sample rate
  4. Align the layers - this can be very tricky, but there is software that can help
  5. Set the Opacity of each layer to average the information
  6. Flatten the image
  7. Unsharp Mask sharpen or use some other image sharpening method
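Here is a simplified sketch of steps 2 through 6 in Python using numpy and Pillow.  It assumes the frames are already aligned (step 4 is the hard part; the Hugin suite's align_image_stack is one tool for it), and the filenames and USM settings are hypothetical examples.

    # Up-rez each frame, average the stack, then lightly sharpen.
    import numpy as np
    from PIL import Image, ImageFilter

    paths = ["frame_1.tif", "frame_2.tif", "frame_3.tif", "frame_4.tif"]

    upscaled = []
    for p in paths:
        img = Image.open(p).convert("RGB")
        w, h = img.size
        # Cubic up-rez, 1.5x linear (e.g. 6000 px -> 9000 px on the long side).
        up = img.resize((int(w * 1.5), int(h * 1.5)), Image.BICUBIC)
        upscaled.append(np.asarray(up, dtype=np.float32))

    # Averaging the stack does the layer-opacity trick in one step
    # (opacities of 100%, 50%, 33%, 25% from bottom layer to top).
    flat = Image.fromarray(np.mean(upscaled, axis=0).astype(np.uint8))

    # Light unsharp mask to finish.
    flat.filter(ImageFilter.UnsharpMask(radius=1, percent=60, threshold=2)).save("stacked_uprez.tif")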

 

NOTE: Remember that I've chosen the Gimp specifically because the software designers have correctly implemented the Cubic interpolation function.  We will select the X/Y resolution of the interpolation filter and it will be properly applied to the image.  

This is very important as I've found that some software packages don't correctly apply the image resolution settings when they perform an up-rez, and images can come out "blocky" and "pixelated" as you increase the image dimensions.

Here is what I suggest.  Using the Gimp, select...

  1. Image -> Scale Image
  2. Quality -> Interpolation -> Cubic
  3. X/Y resolution -> 1200 - this right here is the secret to success

 

Here, once again, is the base scene that I will work from.

Base Image ~ "Super Resolution" comparison ~ 2021

 

In the following comparison I show the base image as processed in RawTherapee and with "Capture Sharpen" applied.  

Then I show the Gimp output of a 4 image stack with Image -> Scale Image from 6000 pixels on the long side to 9000 pixels (roughly a 2x increase in area) with light USM (unsharp mask) applied, selecting a 1 pixel mask width.  This is followed by the image stack sharpened with a sharpening function implemented in G'Mic called Richardson Lucy, which is much more aggressive than a USM.
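For reference, here is roughly what a Richardson Lucy pass looks like in Python using scikit-image.  G'Mic's implementation differs in its details, so treat this as a sketch of the general idea; the Gaussian PSF, iteration count, and filenames are my assumptions.  (In older scikit-image versions the num_iter argument is named iterations.)

    # Richardson-Lucy deconvolution with an assumed small Gaussian PSF.
    import numpy as np
    from skimage import io, img_as_float
    from skimage.restoration import richardson_lucy

    def gaussian_psf(size=5, sigma=1.0):
        """Normalized 2-D Gaussian point-spread function."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
        return psf / psf.sum()

    img = img_as_float(io.imread("stacked_uprez.tif", as_gray=True))
    # More iterations sharpen further, but push toward the "water colory" look.
    sharpened = richardson_lucy(img, gaussian_psf(), num_iter=10)
    io.imsave("stacked_rl.tif", (np.clip(sharpened, 0, 1) * 255).astype(np.uint8))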


NOTE: Some practitioners suggest stacking, up-rezing, and then averaging as many as 20 or more images.  I have tried this approach, and beyond 3 or 4 images I can see no improvement in image "resolution."  YMMV.

 

Stacked 4 Images ~ "Super Resolution" comparison ~ 2021

 

The image stack approach really seems to struggle to add the expected "resolution" to the up-rez'd image.  The USM image is soft to my eyes, even with just a mild 2x area increase in image size.  This should be "easy", right?  Well, it's not.

The Richardson Lucy sharpened image looks pretty good, but it is starting to look "artificial" and "water colory."

Unless I'm seriously missing something, the handheld multi-image stacking approach doesn't quite live up to its initial promise.  It would be interesting to see how this compares with Olympus or Sony sensor "wiggle" in-camera up-rez functions.  Should someone care to share an image or two, I'm all eyes.

What this approach does provide, however, is very clean, noise-free output.  So, in my way of thinking, there is a definite use for this technique.  I have tried it using very high ISO images where there is a ton of noise, and the stacked output looked rather nice.  From what I hear, cellphones freely use this approach when making images in dim light.

Thursday, April 15, 2021

Super-Resolution - Cubic and Unsharp Mask image up-rez [part 1 of 4]

I couldn't help but notice that folks are talking about Photoshop's "new" function that up-rezzes an image.  Some people think it's the cat's meow.  Others point out that Topaz's AI somethingorother is better.  And there have been comparisons showing how "good" an up-rez can be these days.

If you know me, you'll likely smile or laugh or possibly cringe when I say I feel there is nothing new under the sun, and that the new Photoshop and Topaz products are, perhaps, little more than re-packagings of previously existing functions.

 

NOTE: Increasing file sizes does not add information using the aforementioned tools or using the steps described here.  If data isn't in the original file to begin with, it will not be added by increasing the dimensions of the image.  This is potentially important as some software providers imply that Artificial Intelligence (AI) is being used to improve an image in ways that were not previously possible.  This is a demonstrably false implication.  Don't fall for it.

What Adobe and Topaz are doing is simply this.

  1. Cubic up-rez - with an appropriately high interpolation filter sample rate
  2. Unsharp Mask - or some other image sharpening method - set to various "sharpening" levels

Knowing these things, we can do the very same using the free Open Source Software the Gimp to demonstrate exactly what the pay-to-play companies are selling.

I've chosen the Gimp specifically because the software designers have correctly implemented the Cubic interpolation function.  We will select the X/Y resolution of the interpolation filter and it will be properly applied to the image.  

This is very important as I've found that some software packages don't correctly apply the image resolution settings when they perform an up-rez, and images can come out "blocky" and "pixelated" as you increase the image dimensions.

Here is what I suggest.  Using the Gimp, select...

  1. Image -> Scale Image
  2. Quality -> Interpolation -> Cubic
  3. X/Y resolution -> 1200 - this right here is the secret to success

 

NOTE: If you use the default X/Y resolution of 300dpi, the sample rate is too low and your image will be "blocky" and "pixelated" after you increase the image dimensions.  You can try setting the X/Y resolution to 600dpi if you like.  It will certainly work for 2x linear file size increases.  I prefer the 1200dpi setting, as the interpolation "slices" the filter takes will be 2x finer than 600dpi and 4x finer than 300dpi.  If you don't understand why this would be the case, ask me and I will try to point you to a layman's description of interpolation filters.
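If you'd rather script this than click through menus, here is a rough Pillow equivalent of the recipe above.  Pillow applies its bicubic filter directly at the new pixel grid, so there is no dpi setting to worry about here; the filenames and USM amounts are just examples.

    # Cubic up-rez plus light unsharp mask at two mask widths.
    from PIL import Image, ImageFilter

    img = Image.open("base.tif")
    w, h = img.size

    # 1.5x linear, e.g. 6000 px -> 9000 px on the long side.
    up = img.resize((int(w * 1.5), int(h * 1.5)), Image.BICUBIC)

    # Light USM at 1 pixel and 2 pixel radii, as in the comparison below.
    up.filter(ImageFilter.UnsharpMask(radius=1, percent=50, threshold=2)).save("uprez_usm_1px.tif")
    up.filter(ImageFilter.UnsharpMask(radius=2, percent=50, threshold=2)).save("uprez_usm_2px.tif")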

Here is the base scene that I will work from.

Base Image ~ "Super Resolution" comparison ~ 2021

 

In the following comparison I show the base image as processed in RawTherapee and with "Capture Sharpen" applied.  Then I show the Gimp output of a simple Image -> Scale Image from 6000 pixels on the long side to 9000 pixels (roughly a 2x increase in area) with light USM (unsharp mask) applied, selecting 1 pixel and 2 pixel mask widths.

 

UpRez'd Single Image ~ "Super Resolution" comparison ~ 2021

 

As you can see, if you start with a "clean file" (ie: well controlled noise), the USM sharpened output looks pretty good.  It is as good, in fact, as anything I've seen from the new Super Resolution products because, as I said earlier, those products aren't really bringing anything new to the table.

A last note before we move on.  I've found that the Sony APS-C sensors (even the 10+ year old sensors) out-perform Canon's current Full Frame sensors at low ISO.  Canon CR2 raw images have a lot more noise in the shadow areas than Sony ARW raw files.

Further, after working with Canon and Sony raw images for well over a decade, I have the strongest impression that a Sony NEX7 low ISO file up-rez'd from 6000 pixels to 9000 pixels on the long side is cleaner and clearer than a native resolution Canon CR2 file of any dimension, even with a decent "Capture Sharpen" applied.

Heresy, perhaps?  In my case it seems to be the truth.