Sunday, January 28, 2024

Then and Now ...

One evening while casually browsing the internets and stumbling on someone waxing poetic about film cameras, rangefinders, and the "Leica esthetic," I remembered the two M3s I owned back in the early '80s.  One had an old 50mm f/1.5 and the other a 35mm f/3.5 (if memory serves).  I wanted to see if I could find that "Leica esthetic" and, relatedly, that "Leica look" for myself.

I was always looking for the "magic."  Was it in the cameras?  Was it in the lenses?  This is why I tried so many different systems.  There was a Canon F1 (original) with four or five lenses; a little later I had a Pentax MX, and after that a Nikon FM system.

la traversee de Paris 2024

As for the "Leica look" and "Leica esthetic", other than the interesting rendering of the Leica 50mm f/1.5 when shot wide open I found my SLR images of the day to be very similar to the rangefinder.  It took me many decades to realize the "magic" was the nut behind the viewfinder.

Which all led to another late evening musing.  I wondered how the sizes and weights of film camera equipment 40 years ago might compare to digital.  So I did a quick spreadsheet of sizes and weights.  Just because.  Late evening musing. Right. Here it is.


Cameras                         Weight      Length   Height   Depth
Canon F1                        820 grams   147mm    99mm     43mm
Leica M6                        575 grams   137mm    77mm     40mm
Sony A7                         474 grams   127mm    94mm     48mm


Lenses                          Weight      Length   Diameter

Canon FD
Canon 20mm f/2.8 FD SSC         345 grams   75mm     58mm
Canon 24mm f/2.8 FD SSC         330 grams   66mm     53mm
Canon 35mm f/3.5 FD SSC         325 grams   64mm     49mm
Canon 50mm f/1.8 FD SSC         200 grams   63mm     39mm
Canon 85mm f/1.8 FD SSC         425 grams   67mm     57mm

Leica
Leica Super-Elmar 18mm f/3.8    309 grams   49mm     61mm
Leica 24mm f/3.4 Elmar          260 grams   40mm     56mm
Leica Summarit 35mm f/2.5       220 grams   43mm     51mm
Zeiss 35mm f/2.8                178 grams   30mm     51mm
Leica 50mm f/2 Summicron        242 grams   44mm     53mm
Leica 90mm f/2 Summicron        635 grams   102mm    66mm
Leica 90mm f/2.8 Elmarit        395 grams   76mm     55mm

Sony
Tamron 20mm f/2.8               220 grams   64mm     73mm
Sigma 24mm f/3.5 DG DN          225 grams   64mm     51mm
Sony 35mm f/2.8 ZA              120 grams   37mm     62mm
Sony 55mm f/1.8 ZA              281 grams   71mm     64mm
Sony 85mm f/1.8                 371 grams   78mm     82mm

Comparing the old film camera dimensions to a 10-year-old Sony A7 full frame device quickly shows how shapes have evolved. The Canon F1 was a rather hefty device, even back in the day.  The Leica M-series was built rather like a brick (or so it seemed to me).  If one preferred their SLR light and compact, Pentax made those wonderful M-series cameras and Olympus offered the OM-1.

Leica lens dimensions remain comparatively small, with the Sony digital lenses I chose being nearly as compact.

Comparing lens weights shows a couple of things.  Digital lenses can weigh less than old rangefinder optics.  SLR lenses are monsters compared with these two, but we already knew that, right? Similarly, if one wanted to save size and weight while sticking with SLR bodies, Pentax and Olympus both offered some interesting things.  I imagine there's a reason lenses from Pentax and Olympus remain popular among the "focus peaking" crowd.

The obvious thing that has evolved over the years is capability and flexibility.  Film was a one-trick pony.  The ASA and color/monochrome selections were made when the film was loaded into the camera.  Lenses were strictly manual focus.  How we used these old systems necessitated anticipating the needs of the situations we found ourselves in.

Come to think of it, that was really good training for moving into digital.  Anticipation and planning can be helpful.  It forces me to think through a situation in advance, rather than walking into a scenario flat-footed and having to react.  Reacting leaves me all fumble-fingered and confused.

Wrapping up my late evening musings, it's absolutely remarkable how much technology we now have available to us, even as the size and weight of things have pretty much remained constant.

Tuesday, January 23, 2024

Sony A7 (original) tethered to Linux...

I need to document how I tethered a Sony A7 (original) to my Linux laptop.  It's too easy to forget the details since I don't do this often enough for it to stick in my brain.

Process ~

  1. Install "Entangle" (the software manager for my Linux installation was able to find this quickly and install it easily)
  2. Turn on Sony A7 and menu dive to "USB Connection" and set to "PC Remote"
  3. Connect Sony A7 to the Linux computer using an appropriate USB cable (it's the same one I use to transfer files off the SD card in the camera onto the Linux device)
  4. Start Entangle and verify the camera is connected
  5. Everything should be intuitive afterward.
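
For the scriptably inclined, the same tether can be driven from Python.  This is a minimal sketch, assuming the gphoto2 command line tool is installed (it is built on the same libgphoto2 that Entangle uses); the output filename is just a placeholder:

```python
#!/usr/bin/env python3
# Minimal tethered-capture sketch using the gphoto2 CLI, which shares
# libgphoto2 with Entangle. Assumes the camera is already set to
# "PC Remote" and that the gphoto2 package is installed.
import subprocess

def detect_camera() -> bool:
    """Return True if gphoto2 sees at least one connected camera."""
    out = subprocess.run(["gphoto2", "--auto-detect"],
                         capture_output=True, text=True, check=True).stdout
    # gphoto2 lists one camera per line after a two-line header.
    return len(out.strip().splitlines()) > 2

def capture(filename: str = "tethered.arw") -> None:
    """Trigger the shutter and download the image, keeping it on the card."""
    subprocess.run(["gphoto2",
                    "--keep",                       # leave the file on the SD card
                    "--filename", filename,         # local destination (placeholder)
                    "--capture-image-and-download"],
                   check=True)

if __name__ == "__main__":
    if detect_camera():
        capture()
    else:
        print("No camera detected - check the USB cable and PC Remote setting.")
```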

Details - 

  • By default, Entangle was set to delete images in the camera after they were downloaded. Entangle's Settings allowed me to leave the images in both places, computer and camera.
  • Images Entangle downloads are saved to /home/<login_name>/Pictures/Capture.  I need to look at how to change the destination directory.
  • I haven't yet figured out how to get the Live View image to display in Entangle.  Until then, I need to use the LCD or viewfinder to frame the scene, then use Entangle to manage the camera settings and shutter release.

I think there's a way of connecting the output (saved images) from Entangle to Darktable.  Since I don't use Darktable I don't know the details.  I'm still looking at RawTherapee to see if I can connect it to Entangle.  Though it's not really a problem; as long as I have RawTherapee up and running I can see the Entangle-downloaded images and edit them from the "Capture" directory.

 

la traversee de Paris 2024

Upsizing ~ a recipe using the Gimp and G'Mic

Yesterday I sent a 100mpixel image to a friend to have them inspect it closely and pixel-peep to their heart's content.

 

la traversee de Paris 2024

 

The photo started as a 24mpixel Sony A7 image of a motorcycle.  It's a rare beast, the motorcycle, and I used a fine 20mm lens on the tripod-mounted A7.  I'm imagining just how glorious a large print might look going from the native 6000x4000 pixel file to 12000x8000.

I shared the image with this particular friend because they shoot Fuji GFX 100mpixel cameras.  If there is something amiss, they would spot it.

Their verdict? 

"The detail is just mind blowing really ! ! ! !... no arguments on this end... "

An hour later a pretty little Sony A7R came up on one of my favorite shopping sites for a rather decent price.  Should I get it? was the question of the evening.  Based on my friend's reaction I think it's pretty clear the answer would be no if I was thinking in terms of "improved" resolution over what I already own.

Here's the recipe I used -

  1. Open the image in RawTherapee, Capture Sharpen and process the 24mpixel image to taste
  2. Open the image from step #1 in the Gimp and continue with...
  3. G'Mic DCCI2x upsize (found under the "Repair" tab)
  4. G'Mic Inverse Diffusion sharpen set to between 5 and 10 iterations (found under the "Details" tab)
  5. G'Mic High ByPass filter applied to a layer copy of the image from step #4 (also found under the "Details" tab)
  6. Set the High ByPass filtered layer blend mode to "Soft Light"
  7. Flatten and save the result
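
For anyone curious what steps 5 and 6 look like outside the Gimp, here's a rough Python sketch of the high-pass/soft-light idea using numpy and Pillow.  Pillow has no DCCI2x, so plain bicubic stands in for the G'Mic upsize, and the soft-light math below is the common "pegtop" formula rather than the Gimp's exact implementation:

```python
# Rough sketch of the upsize + High ByPass + Soft Light steps of the recipe.
# Bicubic is only a stand-in for G'Mic's DCCI2x; filenames are placeholders.
import numpy as np
from PIL import Image, ImageFilter

def upsize_2x(im: Image.Image) -> Image.Image:
    # Stand-in for G'Mic DCCI2x: a fixed 2x upscale.
    return im.resize((im.width * 2, im.height * 2), Image.Resampling.BICUBIC)

def high_pass(im: Image.Image, radius: float = 4.0) -> np.ndarray:
    base = np.asarray(im, dtype=np.float32) / 255.0
    blur = np.asarray(im.filter(ImageFilter.GaussianBlur(radius)),
                      dtype=np.float32) / 255.0
    # Residual detail re-centered on mid-gray, like a High ByPass layer.
    return np.clip(base - blur + 0.5, 0.0, 1.0)

def soft_light(base: np.ndarray, blend: np.ndarray) -> np.ndarray:
    # "Pegtop" soft light: mid-gray in the blend layer leaves base untouched.
    return (1.0 - 2.0 * blend) * base ** 2 + 2.0 * blend * base

im = upsize_2x(Image.open("motorcycle.png"))
base = np.asarray(im, dtype=np.float32) / 255.0
out = soft_light(base, high_pass(im))
Image.fromarray((np.clip(out, 0, 1) * 255).astype(np.uint8)).save("out.png")
```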

A couple of comments about the upsizing choices I made - G'Mic DCCI2x does a very good job avoiding the pixelation commonly seen on diagonal hard/sharp edges with other upsize operators.  Similarly, I've found that Inverse Diffusion does not over-emphasize pixelation, even at up to 10 iterations, unlike USM, Octave, or Richardson-Lucy.  Finally, again in the interest of avoiding pixelation, applying a High ByPass filter layer blended in "Soft Light" seems to do the trick.

Occasionally I've found applying a light Noise Reduction in step #1 with RawTherapee and again after step #3 in the Gimp can help keep colors "true" along contrasty, sharp edges.  For the motorcycle image I didn't need to apply any NR.

As always, if something isn't clear, let me know and I'll try to do a better job explaining. I see that the first edit of this entry wasn't clear, so it's been updated to correct my mistakes.

Thursday, January 18, 2024

Color Management ~ a recipe for RawTherapee

After having slogged through all that color management and "color science" madness I've settled on a simple, pleasing color processing recipe.

I use Open Source tools (RawTherapee and the Gimp) running on MintOS Linux, so the following might not directly apply to proprietary software or RentWare processing applications running on Apple or Microsoft systems.

Also, as I refer to the dcp color management file, many places refer to this as a "camera profile."  I'm just trying to be specific so that I don't confuse ICC-based color management systems with dcp-based software.

In RawTherapee -

  1. Open RAW file
  2. Let RawTherapee demosaic the image (which it will do for you before displaying an image)
  3. Apply lens corrections
  4. Capture Sharpen, if the ISO is low enough, otherwise apply an appropriate level of Noise Reduction
  5. Apply a camera-specific dcp in Color Management
    1. Select: "Tone Curve" (this one is particularly important for the rest of the process)
    2. Select: "Look Table"
    3. De-select: "Base Exposure" (this will mean fewer Exposure modifications are needed in the following step...)
  6. Exposure slider to set the overall scene brightness (being careful to avoid clipping any of the channels)
  7. Gentle adjustments of the Luminosity channel to fine tune contrast and brightness (again, being careful to avoid clipping any of the channels)

I'm not yet clear on whether steps 5 and 6 should exchange places (see Update below).  It might be the case that stretching and changing the color space (exposure, brightness, contrast) changes how the dcp colors map onto an image.  This is something I'm still considering, though I've not yet been able to confirm any difference in where color management is applied (as step 5, or moved down as step 6 after the exposure is set).

For the way I've written the recipe, enabling "Tone Curve" in Color Management and disabling "Base Exposure" does the heavy lifting of setting a decent starting point for exposure.  The curve is embedded in some (many?) dcp files.  Of course, if you don't like the look of it you can disable it in Color Management and use the Luminosity channel exclusively.  De-selecting "Base Exposure" gets around RawTherapee's inability to match in-camera jpg tones.

Using "Standard", "Film Like", or "Perception" curves can modify colors are brightness and contrast are changed.  This is why "Luminosity" is so useful.  It changes brightness and leaves the dcp color alone.

If you don't already have a decent dcp, this person has a downloadable zip for RawTherapee that comes from the big RentWare provider.  I think these are beautiful and are much better than any of the dcp files I created.

 [Day +1 Update: I measured the colors in steps 5 and 6 and remeasured them when swapping the sequence of these steps.  There's a small difference in color.  So what I've taken to doing is running the process as written, then going back after steps 6 and 7 (when I'm done processing for exposure and contrast) to re-run step 5.  I do that by selecting "Camera Standard" and then selecting "Custom" where the dcp files are that I use.  This sets the colors to the processing levels I intend and avoids the small/minor color shifts that are introduced as the Luminosity channel Exposure and Contrast are manipulated.]


la traversee de Paris 2024

Wednesday, January 17, 2024

Random Thoughts on Photography [10] ~ Getting to "linear"

A friend emailed me recently and asked if I knew about linear profiles.

I had to stop a moment and think about it, because I'd not heard it referred to in this way.  

People are working to get linear curves into certain RentWare.  Remember my comments on "closed systems"?  There are long discussions on the topic of "linearity" in image processing.

If I understand correctly, the re-linearization of RentWare is through the application of a "profile," the goal being to retain as much highlight tonal separation as possible.  It appears that someone realized how strongly RentWare raises highlight tones and wants to "re-correct" that.  Raising highlights to make an image look "good" flattens tonal separation.  For example, in landscape photography, think clouds.
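
The arithmetic is easy to demonstrate.  In this toy example (my numbers, nobody's actual profile), two nearby cloud tones lose about half their separation after a typical highlight-raising gamma curve, while a linear "curve" keeps the full spacing:

```python
# Toy illustration of how lifting highlights flattens tonal separation.
import numpy as np

clouds = np.array([0.80, 0.90])   # two nearby highlight values, 0..1 scale
lifted = clouds ** 0.45           # a highlight-raising gamma lift
print(np.diff(clouds))            # [0.1]    -> linear: separation kept
print(np.diff(lifted))            # ~[0.049] -> lifted: separation roughly halved
```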

Further, if linear profiles are actually camera profiles applied at the initial color management stage, then these must be working to counteract the effects of the software.  Some RentWare uses dcp format files which can have a "curve" built-in.  Perhaps in those cases the camera profile does not specify a "curve."  As you can see, most of this is wild speculation on my part.

In my little Linux/RawTherapee/Gimp world, struggles most often revolve around learning and understanding as there are often fewer and different limits placed on the user.  Generally, I can figure out how things are done in proprietary software and RentWare by studying various on-line guides and such.  So I set out to try and understand this problem of linearity in RentWare.

One of the things I learned about processing black and white images in RawTherapee is that I get the best results when I don't use any of the automated selections that attempt to make an image look good.  Rather, I take the center of the curve and raise it until the mid-tone and highlight relationships look correct to my way of seeing.  Then I mess around with the "lightness" and "contrast" sliders.

Looking closer I saw that RawTherapee set their starting point at a "linear curve."  This is the default/baseline post-demosaic and post-color-management state.  The man page says...

Linear curve ~ This represents the unaltered (or linear) image, so without any tone curve applied. It disables the curve...

No wonder I didn't understand what the problem was in RentWare and the need for "linear profiles."  I already have and frequently start from a proper linear/flat curve/profile.  As far as I can tell, the problem of linearity doesn't apply to my particular process flow.

Beating a dead horse: RentWare and proprietary software work to give us a good looking image in as few steps as they feel they can get away with.  In so doing they take away a fair amount of control early in the pipeline.  Taken to its (il?)logical end, mobile phone image processing takes us even further down the image processing pipeline before it hands control over to us, the users.

One last thought: while I can't really comment on how effective RentWare re-linearization is, I wonder how much color distortion is being introduced into the process.  If these are dcp format files applied at the color management stage, the RentWare linear profiles are probably just fine.  I couldn't help but ask, that's all.

 

la traversee de Paris 2024

Tuesday, January 16, 2024

Random Thoughts on Photography [9] ~ the horror of "Curves"

Recently I took a quick look at color management.  

 

la traversee de Paris 2024

 

The science of human color perception is vast and by now pretty well understood.  You can read various studies and findings here on the internet.  You can also find standards that have been developed based on human perception models of color and luminance, so I won't try to bore anyone with the details.

Of course there are many ways of "getting there from here."  There are algorithms that implement various aspects of color science.  Unfortunately these are the kinds of things that are for better or worse hidden from consumers to a certain level.  On the other hand camera manufacturers and RentWare do their best to present us with "pleasing images" by making early decisions for us.

Back when dinosaurs roamed the earth and film was what we used to make a photograph, we were completely dependent on what manufacturers produced.  It was their job, the manufacturers that is, to find ways of pleasing us the consumer so that they could sell more product. Now we have software that does similar things, trying to simplify as much as can be simplified, while offering the consumer some sense of control over the final "look" of their images.  

Rarely do we have access to something that offers us complete control, from the yet-to-be-demosaiced RAW through the fully color managed, ready-for-additional-user-input stages.  It's rather like watching an alchemist mix up egg white/ether/gunpowder/silver nitrate on their way to coating a glass plate in the wet-plate collodion process.  When dry plate and film were introduced, all the "dirty work" was taken care of for us and we didn't have to be alchemists to make a photograph.

In trying to transfer image processing settings from one system to another I noted that camera manufacturers and RentWare companies tend toward "closed" products.  That is, they can "bake" into their image processing software their own "special sauce" and it is very difficult to understand what's going on behind the scenes.  I can imagine this is confusing and might have led to the rise of on-line discussion forum "color science" wars.  No one really knows much of anything, but people seem to have a lot of ideas.

RawTherapee is image processing software that doesn't hide much, if anything.  I'll bet this is one of the reasons people tend to avoid it.  Yes, it's Open Source.  Yes, it's free (as in liberated).  Yes, it can be very, very complex.  But if one understands what they're doing, many of the "icky bits" can be streamlined into a simple one-button process.  In fact, the application provides many pre-configured processes.  I've found it a feature-rich software.  The engineers seem to really know what they're doing.

One of the things I noticed while trying to understand the demosaicing and color management process was that, once I got through these stages, there are several different kinds of "Curve" mode algorithms.  Ooph.  More to learn and understand.  Will this never end? Possibly not.

Girding the loins, I set out to fill in my knowledge gaps about "Curve" modes.  Here is what RawTherapee's man page has to say.

Curves ~

Standard ~ ...the values of each RGB channel are modified by the curve in a basic "correspondence" method, that is the same curve is applied to all channels...

What I've experienced in using this "Curve" is that manipulating the luminosity curve equally manipulates, and quite strongly, the RGB channels.  Changes in luminosity and contrast directly affect color saturation.  Strongly increased contrast yields strongly increased color saturation.  This is "un-natural" in terms of human perception of color.

Strong "Curves" manipulation can distort information and I have to be very careful to guard against loosing subtle tonal gradations in shadow and highlight regions.  Interestingly, this is the classic "Curves" algorithm that I'm sure everyone knows and loves. We've all gotten rather used to these effects and have learned to either work around them or embrace them as part of our images "look."

Weighted Standard ~ ...use this method to limit the color shift of the standard curve...

This method decouples the behavior of the RGB channels from the luminosity curve by handling them with a different algorithm.  Color saturation changes are much more subtle, in fact more subtle than with the following "Curve" type, "Film-Like."  Weighted Standard looks and behaves more like "Luminance" and "Perceptual."

Film-Like ~ The film-like curve provides a result highly similar to the standard type (that is strong saturation increase with increased contrast), but the RGB-HSV hue is kept constant - that is, there are less color-shift problems...

This is pretty interesting, actually.  Color saturation shifts remind me very much of film.  I still have to guard against subtle tonal loss in the highlights and shadows, but I've found "Film-Like" to be a little more manageable than "Standard."

Saturation and Value Blending ~ This mode is typically better suited for high-key shots...

This mode is a strange one to me.  I read the description and can't make heads nor tails of what's going on, which means yet another learning opportunity is presenting itself.  I guess I just don't shoot enough high-key to know how best to use this "Curve."

Luminance ~ Each component of the pixel is boosted by the same factor so color and saturation is kept stable, that is the result is very true to the original color...

And here is something that I'm still coming to grips with.  Using "Luminance" or the next mode, "Perceptual," both of which stay "true to the original color," illustrates for me just how bland the real world is.  I begin to see just how much color distortion we've become accustomed to.

Coupling "Luminance" with LAB color space functions can bring an image more in line with "Standard" and "Film-Like" saturated color images.  But there's a subtle twist.  Highlight and shadow areas are controlled in a LAB color space and can retain more/different information.  This is an interesting option for potentially unique image processing.

Perceptual ~ This mode will keep the original color appearance concerning hue and saturation, that is if you for example apply an S-curve the image will indeed get increased contrast, but the hues will stay the same and the image doesn't look more or less saturated than the original...

If I really need saturation, there's always either the traditional RGB saturation slider or the LAB colorspace "chromaticity" slider.  This mode looks similar to "Luminance."

This is a lot of information to digest, particularly if one isn't used to so many options.  What I've taken to doing is working with an image and trying each "Curves" mode to see what it does, and as a learning exercise to try to match "Standard" and "Film-Like" output.

For instance, I recently re-worked an image that I'd first processed using "Standard".  This time I worked it using "Luminance" curves mode.  I pushed hard on the "Chromaticity" LAB color space slider to get a similar looking image.  But with "Luminance" I was able to retain good highlight and shadow detail that I found to be more properly rendered.

In another instance, I worked with "Film-Like" and used gentle curves manipulations.  The output of that process is fairly "natural".  Using "Luminance" on the same image, again using subtle curves manipulations yielded what felt like a color-drained scene.  Again, it illustrated to me just how much we've come to accept wildly saturated images as "normal."

I have a few more things to share about curves and such.  But I'll save them for another time.  Until then, I will continue to work with RawTherapee's curves modes and see if I can begin to match the various proprietary software outputs.  As a further learning exercise, of course.

Saturday, January 13, 2024

Random Thoughts on Photography [8] ~ color management and color "science"

Chateau de Sceaux ~ 2023

 

I've been thinking and poking and looking at the topic of color management in light of manufacturer claims.  Certain camera companies claim to have a "secret sauce" that is called "color science."  I wanted to consider this and see if there might be ways of pollinating these much vaunted "color sciences" across other manufacturer flowers, er, cameras. 

Background ~

I smile as I remember the battles that were waged on-line over who had the "better color science," Canon vs Sony.  I still read comments from Fuji users who've fallen in love with Fuji's film profiles.  Leica users wax poetic about a certain "Leica look."  And for years I've read how Hasselblad has the best "color science" of any of the commercial camera manufacturers.

Is it real or is it marketing blather?  Perhaps only real "color scientists" know for sure.

On a practical level, and before I dove into this topic, what I experienced was that my old Canon images are always redder than my very neutral looking Sony output.  Carefully reviewing Fuji X or GFX images, I can't see where they are demonstrably different nor better than any other products on the market.  I can't look at an image and "see" any Leica Magic coming out of Leica products.  Nor am I convinced that Hasselblad has a better understanding than anyone else in the industry about how to make colors.

Yes.  I'm a "color science" skeptic. Why?  Well, it's a very long explanation.  So sit down, strap in, and hold on tight.  Here I go.  Yet again.  Bashing some drum or other.  This time it's "color science."

In-Camera Pipeline ~

Here is how I understand digital imaging to work, from start to finish.

  1. Light hits a light sensitive diode
  2. Diode spits out a tiny electrical signal
  3. Hardware applies gain to amplify the signal (ISO sets the amplification level) 
  4. Amplified signal enters an analog to digital converter (ADC)
  5. Digitized information exits the ADC (may receive additional signal processing as in the cases of extended ISO settings)
  6. Now there is a fork in the process ~
    1. Digital information passes through an ASIC that processes the digital data and writes it into an in-camera generated JPG file
    2. Digital information is written into a RAW file

We now have two file possibilities, RAW and/or JPG. 

A JPG file is the result of what the manufacturer implements, purely and simply.  It has passed through an in-camera de-mosaicing algorithm of the manufacturer's specification as well as other imaging algorithms (where various "looks" can be applied).

JPG image processing decisions have been made by the manufacturer, so if there's a special "color science", it can be applied here.

Off-Camera JPG Pipeline ~

It is potentially important to note that further processing of a JPG starts with the limited 8-bit color range found in the file. Heavy processing can distort colors, destroying any special "color science" that had been "baked" into the JPG file.

Off-Camera RAW Pipeline ~

RAW files require additional processing.  If your software has the ability to display true RAW file information, you'll see a strongly green-cast, mushy sort of image with red and blue dots evenly sprinkled around the image field.  In short, it's an ugly mess and it's easy to see why we need software to sort things out for us.

Here is the Bayer RAW format pipeline.

  1. RAW file is loaded into software
  2. De-Mosaic is most often the very first function performed on a RAW file, where individual "pixels" of RGGB (red, 2x green, blue) information are processed using surrounding "pixel" data to calculate color and luminosity, such that there is now a different kind of information (de-mosaic calculated) for each "pixel," changing each and every original red, 2x green, and blue location into full color with brightness
  3. De-mosaic'd images still look ghastly as the colors are still "off", so...
  4. Software "massages" a de-mosaic'd image and displays it to the user
  5. User can now begin performing additional modifications using the tools the software provides
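
Step 2 is easier to appreciate with a concrete example.  Here's the simplest textbook bilinear demosaic of an RGGB mosaic in Python, assuming numpy and scipy; real demosaicers (AMaZE, RCD, and friends) are far more sophisticated:

```python
# Minimal bilinear demosaic of an RGGB Bayer mosaic. Each output pixel's
# missing colors are interpolated from neighboring sites of that color.
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic: np.ndarray) -> np.ndarray:
    """mosaic: 2-D float array holding RGGB Bayer data. Returns HxWx3 RGB."""
    h, w = mosaic.shape
    r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
    r[0::2, 0::2] = mosaic[0::2, 0::2]          # red sites
    g[0::2, 1::2] = mosaic[0::2, 1::2]          # green sites on red rows
    g[1::2, 0::2] = mosaic[1::2, 0::2]          # green sites on blue rows
    b[1::2, 1::2] = mosaic[1::2, 1::2]          # blue sites

    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # red/blue kernel
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0  # green kernel
    return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])

# e.g. rgb = demosaic_bilinear(raw_data)  # raw_data from a RAW decoder
```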

Where's the "color science" in RAW image processing?

There are a couple candidates for where a manufacturer can add their own "special sauce."

The first place we can ask "what has a manufacturer modified?" is the de-mosaic stage.  From what I've read, this step would be an incredibly difficult place to add a dash of "special sauce".  The algorithms are complex and varied.  Have a look at the de-mosaicing possibilities in the Open Source software RawTherapee and you'll see what I mean.

It turns out there are very good reasons to use the various de-mosaicing approaches, but if you are a RentWare user, I suspect you have zero control over the de-mosaic stage.  Please tell me if I'm wrong.

In any event, I feel that "color science" is not applied at the de-mosaicing stage.

The next stage is where things can get a little more interesting.  This is where camera (better said, sensor model) specific color management comes into play.

Some RentWare and Open Source software use .dcp format files to store camera-specific color corrections.  Hasselblad embeds camera-specific color corrections in proprietary XML formatted files.  I'm not sure what Sony, Canon, Nikon, Fuji, Olympus, or Panasonic proprietary software use, but rest assured, both de-mosaic and color management functions are "baked" into them.

In nearly all cases, camera-specific color corrections are hidden from the user.  However, there are instructions on how to generate .dcp color management files for RawTherapee.  Reading these instructions is insightful and relates directly to the question at hand.

Taking industry standard color charts, camera model to camera model variations can be corrected.  If you use RawTherapee, for example, have a look at the Color Management tool and compare the colors between "Camera Standard," the automated camera-specific colors, and no corrections.  The differences are illustrative.  Then look at properly color managed files from different cameras and tell me where you see any differences.  Using non-manufacturer software tends to eliminate the effects of any "special sauce."

If a camera manufacturer is to inject some special "color science" during RAW image processing the color management stage would be an excellent place to do so.

What Else? ~

It is _after_ the de-mosaicing and the subsequent color management stages that color look-up tables can be applied.

Color grading (not to be confused with color management) can be a one-button exercise in much software.  Many times these looks are implemented as LUTs.  Video software as well as still photography RentWare comes with many LUTs pre-installed.

With proprietary and RentWare software you get the de-mosaiced, color managed file displayed after RAW file import.  Then there is this thing they provide that adds a special "look."  For Sony it's things like "Neutral", "Standard", "Vivid", "BW", etc.  That's the color look-up table (LUT) stage.

What's potentially interesting here is that as a user you can generate your own LUTs.  Of course you can buy them, too.  What's nice is that the user controls the output.
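
The principle is simple enough to sketch.  Here's a per-channel 1-D LUT in Python with a small invented "warm" table; real LUTs (.cube files and the like) are usually 3-D and much denser, but the map-input-to-output idea is the same:

```python
# A per-channel 1-D LUT. The five-point "warm film" table is invented
# purely for illustration; real LUTs are far denser and often 3-D.
import numpy as np

anchors_in = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
lut_r = np.array([0.00, 0.30, 0.58, 0.82, 1.00])   # lift reds a touch
lut_g = np.array([0.00, 0.26, 0.51, 0.76, 1.00])   # greens nearly neutral
lut_b = np.array([0.00, 0.22, 0.45, 0.70, 0.97])   # pull blues down

def apply_lut(rgb: np.ndarray) -> np.ndarray:
    """rgb: float array (..., 3) in 0..1; interpolate each channel's table."""
    out = np.empty_like(rgb)
    for ch, lut in enumerate((lut_r, lut_g, lut_b)):
        out[..., ch] = np.interp(rgb[..., ch], anchors_in, lut)
    return out
```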

Last Question ~

Might it be possible to somehow "borrow" from one software to another and preserve a manufacturer's recipe?

Maybe.  It depends on several things.  If we can import an industry standard color chart and have a manufacturer's color management applied to it, then yes, there's a possibility we can duplicate a "special sauce" and move it to another software.

Yet I have my suspicions.  I see manufacturers have generated closed system software where the early image processing steps are hidden from the user.  We don't know exactly what parameters are being used and we're in the dark about many potentially important details.  So, in the end, I'm not sure how successful we can be, but we might be able to get surprisingly close.

Another approach could be to open a manufacturer's proprietary software with an image whose processing we like, then next to it open another software and change processing settings until the images look the same, then save the actions or generate a LUT.  This is much more "hit and miss" and would not be very accurate.

By hiding what manufacturers are doing behind the wall of proprietary software they can make any marketing claim they want.  To make matters worse, RentWare does exactly the same thing. There is no way of verifying any of these claims.  Further, such claims can then be picked up by communities of photographers where they can take on a life of their own.

On the other hand, what I'm experiencing these days is that by more deeply understanding each and every step in a RAW file processing pipeline, I am at liberty, supported by increasing knowledge, to create images that are pleasing to me.  Matching one company's "color science" to something I'm working on has become a fun and interesting challenge, and I'm very happy when I get very, very close, if not precisely spot-on, to something that's been "secret sauced" behind a proprietary/marketing "color science" wall.

Wednesday, January 10, 2024

Random Thoughts on Photography [7] ~ SuperResolution, a better solution

I spent a bit of time looking at single image and multiple image stacking UpScaling.  As a friend suggested several years ago, there's a simpler way.

If one has the time, image stitching for true SuperResolution is a very fine solution.  Having a tripod helps.

Consider the following.  I used a Sony A6300, a Sigma 19mm f/2.8 EX DN E, a tripod, and a small "L" bracket that I made before moving to a place where access to materials and tools is very difficult to non-existent.

There's a trick to using the "L" bracket and it's simply this: I have to make sure I put the pivot point directly under the lens' nodal point.  In practical terms, make sure the tripod head mount screw sits under/aligns with the lens aperture location.  When done in this way and by swiveling the tripod head to capture overlapping sections of the scene, image stitching is a breeze and all image elements quickly align.
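
Once the frames are captured this way, the software side can be nearly automatic.  Here's a minimal Python sketch using OpenCV's high-level stitcher; the filenames are placeholders, and something like Hugin would do the same job interactively:

```python
# Minimal panorama stitch with OpenCV (pip install opencv-python).
# Frames captured by swiveling about the entrance pupil align easily.
import cv2

frames = [cv2.imread(f) for f in ("left.jpg", "center.jpg", "right.jpg")]
stitcher = cv2.Stitcher_create()        # defaults to photo panorama mode
status, pano = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("stitched.jpg", pano)   # long side well past a single frame
else:
    print(f"stitching failed with status {status}")
```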

The dimension on the long side is well over 9,000 pixels.  100% crops are along the top.  The full scene is under the crops.  There's no lack of definition/resolution.

My friend was, of course, correct.  There is a simple/inexpensive way of achieving SuperResolution and this is it.

 

Image Stitching for Increased File Sizes

 

Tuesday, January 09, 2024

Random Thoughts on Photography [6] ~ Sigma 24mm f/3.5 DG DN vs Nikon Nikkor 24mm f/2.8 Ai

I've had a nice Nikon Nikkor 24mm f/2.8 Ai up for sale for quite some time now.  The reason is that last year I decided to put together a full frame kit based on AF lenses.  So I sold nearly all of my manual focus Nikkor glass.  I'm getting old, and focusing MF optics, no matter how brilliant they are, presents challenges that I'd rather not spend my time dealing with.  So, home came a Sigma 24mm f/3.5 DG DN to replace the Nikon 24mm.

Not having compared the two lenses I thought I ought to, just in case I discovered a "need" to hang onto the Nikkor.  So using a couple lens test charts that I printed I set out to see whatever there was to see.

Mind you, I'm not measuring anything.  Other people do a better job of such things than I.  However, by comparing one lens against another using the exact same setup I should be able to illustrate any differences.

What I find is that the -

  • Nikon, without software intervention, is a little soft in the corners compared to the Sigma, but...
  • Nikon appears to have just a hint of something sharper/contrastier in the center of the frame.  Maybe.
  • Sigma has pincushion distortion that is correctable with software intervention
    • There seems to be little to no impact on "resolution" in correcting the Sigma's distortion in software
  • RawTherapee's software sharpener, called Capture Sharpen, pretty much levels the field
Recently I had a Wild Hair to try IR imaging, and the Nikkor would be perfect for that.  It has an IR focus mark, whereas with any AF lens I have it would be a guessing game as to where critical focus is in the IR range.  So, if I can't/don't sell/trade away the Nikkor, perhaps I'll eventually be able to put it to good use.

Until something happens the old Nikkor rests peacefully in the toy closet.

 

Sigma 24mm f/3.5 DG DN, Nikon Nikkor 24mm f/2.8 Ai ~ Comparison Capture Sharpen

 

Sunday, January 07, 2024

Random Thoughts on Photography [5] ~ 50+ year old lens vs very recent

One of the side-trips I took while twisting and turning around the question of how best to upsize images was to take a slightly different look at comparing two lenses, one 50+ years old, the other fairly recent: the Nikon Nikkor-S 50mm f/1.4 (c.1972) vs the Sony Zeiss 55mm f/1.8 ZA.

The Nikkor-S was Nikon's first 50mm f/1.4.  It's a double-Gauss design, single coated, manual focusing, and about as simple as they come.  They can be fairly inexpensive, if you keep your eyes open.

The Sony Zeiss 55mm f/1.8 ZA needs zero introduction.  I've taken to wandering about with this lens mounted on an old Sony A7 (original).  Used prices have fallen precipitously.  I found a mint example for rather reasonable money.

The difference for this comparison is that I printed a few lens test charts and taped them to a wall.  The comparison setup was what I've been using for years - tripod mounted camera, 2 second delay timer, 100ISO, Aperture mode, processed in RawTherapee, etc.

Comparing results and starting at f/2.8, can anyone tell a meaningful difference between these two lenses?  Seriously.  At f/4?  Or f/5.6 on down through the aperture range?  

Add a bit of Capture Sharpen and what do I have?  Insanity, I tell you.  Insanity!!  Can't get any sharper than this, can we?

OK. Clearly (har!) there are differences wide open and at f/2.  The Sony is gorgeous and I imagine this is what 50+ years of consumer optical design will get you.

On the other hand, wide open and at f/2 the Nikkor looks like an early Voigtlander Heliar f/4.5 large format lens.  Soft-ish wide open, but with sharpness underlying the veiling spherical aberration.  There has to be a way of taking advantage of this, right?

Just as importantly, could we say the old Nikkor-S is still fully usable, 50+ years on?

 

Capture Sharpened ~ Sony 55mm f/1.8 ZA vs Nikon Nikkor-S 50mm f/1.4

Friday, January 05, 2024

Random Thoughts on Photography [4] ~ SuperSize Me!

Upsizing articles are targeting me.  I know it.  It's clickbait.  Yes.  But I can't help myself.  I read an article that had my mouth wide open, ready for the hook, line, and sinker.

Could it be true?  The images in the article were rather small, so it was difficult to tell.

I've been looking at this question for years.  It's appealing to take a single image and upsize it, while retaining a certain pleasing sense of "sharpness."  The thing that had me blocked was pixelation.  Cubic, NoHalo operators found in the Gimp are good, but I never was able to get rid of certain forms of pixelation.

Thinking that "SuperResolution" stacked imaging might provide the answer, I worked hard on the process until I was finally able to declare success.  But that process did not remove pixelation any better than Cubic and NoHalo on single or stacked images.  There was always some little raggedy edge somewhere that caught my attention.

Before going to sleep I often look around the 'net looking for solutions to various photographic problems that I encounter.  Lighting, composition, the physics of diodes, and such silliness.  One night I happened to put into the search engine a properly formatted question and lo and behold, a possibility.  An answer.

Back in 2015 there was a conversation in the G'Mic discussion forum about the very problem that had me stumped.  Someone gin'd up an implementation and suddenly DCCI 2x was incorporated into the G'Mic suite of tools.  I had already added G'Mic to my Gimp toolset many years ago.

DCCI 2x upsizes bi-directionally, one scan 90 degrees from the other.  There is no choice in how much upsize one gets.  It's fixed at 2x, and this clue is embedded in the name of the function.  But there's nothing stopping anyone from doing two 2x upsizes back to back, or more, should they want (see the sketch below).
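
In code form, the chaining looks like this.  Pillow has no DCCI2x, so bicubic stands in for the fixed 2x operator here; the point is only that two back-to-back passes give 4x:

```python
# Chaining a fixed 2x upsize operator. Bicubic is a stand-in for DCCI2x.
from PIL import Image

def upsize_2x(im: Image.Image) -> Image.Image:
    return im.resize((im.width * 2, im.height * 2), Image.Resampling.BICUBIC)

im = Image.open("frame.png")     # e.g. 6000x4000 (placeholder filename)
im4x = upsize_2x(upsize_2x(im))  # two back-to-back 2x passes -> 24000x16000
```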

In any event, pixelated edges on upsize seem to have vanished into thin air.

So I set up a scene similar to the clickbait that had hooked me.  Using a tripod, setting the ISO to 100 and the aperture to f/5.6, the 2 second timer fired off one frame at 70cm from subject to camera.  Then I backed the tripod up another 70cm and took another photo.

There was now one 24mpixel image, and another that was equivalently 12mpixel (when cropped to the closer image field).  Image processing was pretty straightforward.  For one of the images I did not use Capture Sharpen in RawTherapee.  But for the other three Capture Sharpen was applied.  The 12mpixel (equiv) images were then opened in the Gimp for further processing.

For one of the DCCI 2x (G'Mic) images I added de-noise on the DCCI'd image, then added a High ByPass filter (G'Mic) as a first sharpen step, then added a single pass Richardson-Lucy sharpen (G'Mic).

For the second DCCI 2x (G'Mic) image I created a Luminosity Mask to isolate the edges of the various elements in the scene so that the next step, Richardson-Lucy sharpen (G'Mic), would act locally, and then, finally, I did a light global sharpen using Inverse Diffusion (G'Mic).
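
For those wanting to experiment outside the Gimp, here's a hedged sketch of a single-pass Richardson-Lucy sharpen using scikit-image.  The Gaussian PSF size and sigma are my guesses for mild capture blur, not G'Mic's defaults (and in older scikit-image releases the num_iter argument was named "iterations"):

```python
# Single-pass Richardson-Lucy deconvolution, per color channel.
import numpy as np
from skimage import io, restoration

def gaussian_psf(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Small normalized Gaussian point spread function (assumed blur)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

img = io.imread("upsized.png")[..., :3] / 255.0   # placeholder filename
psf = gaussian_psf()
# Deconvolve each color channel separately with one R-L iteration.
sharp = np.dstack([restoration.richardson_lucy(img[..., c], psf, num_iter=1)
                   for c in range(3)])
io.imsave("sharpened.png", (np.clip(sharp, 0, 1) * 255).astype(np.uint8))
```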

What did I find?  The clickbait isn't entirely full of BS.  It's not quite what was promised, as I can see differences between the standard images and the DCCI'd work.  The original size images are clear and crisp and filled with detail.  Pixel-peeped DCCI'd images are just a touch softer, even after several rounds of sharpening.

Further work would certainly improve the DCCI'd photos.  On the other hand, the DCCI'd photos are already surprisingly close to the 24mpixel no Capture Sharpen output.

What the recipe came down to is simply this -

  • Starting with a sharp image
  • Capture Sharpen to reveal as much detail as possible (PS does this in their SuperResolution tool)
  • DCCI 2x the image (in PS is the upsize selectable?)
  • Reduce noise as needed (PS has this option, too)
  • Selectively sharpen (not sure how PS handles this, if it's global or if it's local to the edges of a scene)

Sharing my homework - 


 UpSize Comparison - Z

Thursday, January 04, 2024

Random Thoughts on Photography [3] ~ SuperSize yet some more

When I read this PetaPixel article I thought, well, maybe here's a justification for picking up a really cheap Sony A7S.  The clickbait title had me at "Super Resolution Eliminates the Advantage of High-Megapixel Cameras". 

If SuperResolution is a real "thing," then why spend money on more megapixels?  Besides, I could use high ISO performance and an all-electronic shutter for some of the things I do in museums.  I don't want to disturb the fine citizens and visitors with a clacking shutter.  And something that didn't get filled with noise would certainly make image processing easier.  So I set off, AGAIN!, to examine what's possible and to try and understand what's really going on.

The goal of all this thrashing was to see if there was a way of using Open Source software to match the perceived performance of something for-pay, like Topaz's Gigapixel AI (or whatever they call it these days).

You see, I have more than a small beef with for-pay software, and particularly for-rent products like anything Adobe, or Capture One, or the other rentable image processing software that's out there (and there's a surprising amount of it).

I go back to a time when you could buy something, cash on the barrel head, and walk away with something that was durable and would last for decades.  Mechanical cameras.  Stereo systems. Music (vinyl or CD).  You didn't rent anything if you could avoid it, other than an apartment.

Working with Capture One was frustrating.  I had been using their image processing suite and left as soon as I realized they were transitioning to rent-ware.  It drove me nuts.  

I submitted bug reports and, while the bugs were quite valid, I was told to wait for a future update or, as in one case, to avoid using hardware acceleration.  For that kind of aggravation I might as well go elsewhere.  I have no tolerance for such things.  I know how engineering can/should work, and they weren't even trying from what I could see.  Throwing in the towel, I voted with my money and my feet.

The timing worked out well, actually.  One of the hard drives toasted itself and I had a new one installed.  Having been a Linux advocate for several decades, I performed a clean Linux OS install, complete with all my favorite image processing software.  I was thrilled.  Everything ran visibly faster than it ever did on WinDoze.  I knew all this from my work years, but someone had convinced me that the better stuff was on WinDoze and SnApple.  I should never have listened to them.

OK.  So that's a long way of saying I don't like rent-ware.  Suffice it to say, I would be happy to find a workable Open Source image upsizing solution that performed anywhere near what the for pay packages do.  At which point I entered a rather long series of examinations looking for a process/tool combination that gave acceptable results.

If you're interested in the twisted, rocky, strange road I took, here it is.  I was convinced that image stacking would change the edges of contrast and color transition zones, and that this would be beneficial to upsizing/upscaling images in preparation for big, huge, enormous prints.

Sony A6300, Sigma 30mm f/2.8 EX DN, Single Image vs 5 Image Stack, plus Gimp NoHalo upsize and various sharpeners.

Maybe Full Frame would be better?  Here's a Sony A7, Sony 35mm f/2.8 single Capture Sharpened and 5 image stack NoHalo upsize comparison, all the way to 24,000 pixels on the long dimension.

Then I re-ran the Sony A7 single image comparison using a different image sharpener.  And, again, this time using 5 image stacks.  All looking for a way to smooth the edges of angular hard contrast transition edges.

I slipped to the side when, in the Dark of Night, I got to wondering how good/bad a 50+ year old Nikon Nikkor-S 50mm f/1.4 was, and, while I'm at it, why not add a little rigor by using lens test charts?  Oh, and, gee, RawTherapee has a Lanczos upsize operator in it, too, doesn't it?  So let me try that as well.

All this was driving me nuts as I juggled the matrix of possibilities to find as direct a path to success as I could, independent of my Dark of Night musings.

Remember what I wrote about the value of study and research?  At this point, the skies parted, the light shone through, and... what's that?  Argh.  It's a Paris winter.  But there's plenty of time to stumble around the "internets" when the light is low.

In my next post on Random Thoughts, I'll try to share what I found.  And because I'm at this point 9 years late, most of you likely already know these things and many others, but if on the off-chance you're not already familiar, stay tuned.

Chateau de Sceaux ~ 2023



Wednesday, January 03, 2024

Random Thoughts on Photography [2] ~ Super-Resolution vs Noise

I recently read Keith Cooper's article on making a giant print from one of his 11mpixel Canon 1Ds files.  When the job was done, he said "I couldn’t resist asking a few bystanders what they thought, and the look on people’s faces when I told them it was from an 11MP camera made it all the more worthwhile."

What those early 11 and 12mpixel sensors had, in general, was noise that was low for their time.  Yes, you have to be careful about upsizing images for display/printing, where 24mpixel to 100mpixel files would be a lot easier to process.  There seems to be something still very relevant about knowing what the camera is capable of, being careful during image processing, and learning how various image processing tools affect the final output.

Currently we the consumers are blessed with quiet sensors from Sony.  Nikon and Fuji use them.  Canon uses a Sony 1-inch sensor in some of their Point and Shoot devices, too.  Noise is less of an issue than it once was, even compared to just a decade ago.

In my various stumblings around looking at stuffs and things, seeking as much color and resolution perfection as possible, I've come to understand that even at base ISO there is subtle noise.  There's a fancy name for it: Photo Response Non-Uniformity (PRNU).  This includes (if I grok it all correctly) the subtle variations in pre-Analog to Digital Converter (ADC) response to light.  This kind of pre-ADC noise can cancel through image stacking.

Jim Kasson wrote about the Sony A6300.  He says it takes about 64 images stacked and averaged to fully eliminate this form of noise.  When I look at his graphs I see that there is a 3x reduction in PRNU-induced noise in a relatively simple 4 image stack.  I don't currently feel any need to stack 16 nor 64 images.

This, on a practical level, is exactly what I've experienced.  The sharp contrast transition zones (luminance or color, doesn't matter) "smooth" out with image stacking at native file resolutions as the noise is averaged out.  One might feel these are perhaps "softer" images than a single shot, but this is not at all the case.  Subtle noise adds "crunchiness" to an image that can be read as "sharpness."
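
The averaging principle is easy to simulate.  For fully uncorrelated noise the textbook reduction goes as the square root of the frame count, so a 4-frame stack halves the noise standard deviation (Kasson's measured PRNU numbers differ because real sensor noise has several components).  Synthetic frames here, not real sensor data:

```python
# Stacking demo: averaging 4 noisy frames roughly halves the noise.
import numpy as np

rng = np.random.default_rng(42)
scene = np.full((512, 512), 0.5)                      # flat mid-gray "scene"
frames = [scene + rng.normal(0, 0.02, scene.shape) for _ in range(4)]

single_noise = np.std(frames[0] - scene)
stacked_noise = np.std(np.mean(frames, axis=0) - scene)
print(f"single: {single_noise:.4f}  4-stack: {stacked_noise:.4f}")
# -> roughly 0.0200 vs 0.0100: noise halves, and edges "smooth" as described
```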

Taking all this into consideration, and in the Dead of Night (no, not the Dead Knight), a heretical thought came to me.  What if the "Super Resolution" handheld camera technique, with all the fun of aligning 4 to 20 images (depending on who you read), really comes down to noise reduction as the most important operator that helps make a super clean image, rather than "pixel shift"?

If Keith Cooper's 11mpixel giant print is any indication, and outside of being a nerdy, narrowly focused (har!) intellectual curiosity, I'm not quite sure how any of this matters.

Chateau de Sceaux ~ 2023

Tuesday, January 02, 2024

Random Thoughts on Photography [1] ~ Steve McCurry and print size

Last year I visited the Steve McCurry exhibit at the Musee Maillol. Seeing his images in person was a moving, emotionally charged experience. 

I mightily appreciated his early Nikon FM Nikkor 105mm f/2.5 Kodachrome images. Printed to what looked like at least 20x30 inches, they were glorious. It did not matter to me that the original image format was "small." The subjects with that particular camera/lens/film combination "worked."

As we progressed through the show I could tell when M. McCurry went to digital. The prints were less saturated, perhaps due to the matte surface choice and/or color management decisions. But I also saw the overall sense of sharpness was still "there." Of course the subject matter continued to bring the impact of his earlier work. It was a matter of a style shift, more than anything else, that clued me into the transition.

The transition from 12mpixel (Nikon D700) to 45mpixel (Nikon D850) was a little less apparent. I could tell, but most of the people around me didn't notice, nor should they care.  There was zero pixelation in any of the digital prints.

I see that the D700 is now considered by some to be a "legend." There's nothing mystically magic about 12mpixels. The sensor does what it does and overall image quality has since improved. Progress in sensor technologies has seen to that, to the point where current 61mpixel and 100mpixel dynamic range and noise match or better the original Nikon D700 12mpixel output.  Have a look at Photons to Photos to confirm this for yourself.

There was something interesting in the PetaPixel article.  Someone said that in working with a 12mpixel image, "Printing is often used as an excuse for why someone needs a very high-resolution camera, but the reality is that — aside from very finely textured papers or those intended to be viewed from several inches away — resolution is far less critical for printing than many people suggest."

That really got me to thinking. Steve McCurry's show certainly proved that point. Hammered it home, in fact. The proof was right there for everyone to see in all those beautiful prints.

Bibliothèque nationale de France ~ 2023