Saturday, April 20, 2024

What to do in late 2023 into 2024

Here's a list of potentially interesting/fun things to do around l'Île-de-France.

Art/History/Automobile Photography Opportunities

1-12 May - Foire de Paris

11/12 May - Vintage Revival Montlhery 100 year celebration!

2-8 June - Rallye des Princesses 2024 (I won't be here for this)

23 June - Peking to Paris 2024 ends in Paris 

30 June - la traversee de Paris (a month earlier this year)

12-13 October - 100 year celebration Montlhery UTAC

Calendar of Montlhery events - 2024

 

-------------- DONE ---------------

21-27 April - Tour Auto 2024 at the Porte de Versailles, Parc des Expositions

14 April - rassemblement Vincennes en Anciennes old car show

14 Avril 2024 ~ Rassemblement Vincennes en Anciennes

17 March ~ Photo Foire ~ Chelles  ~ a GREAT show, but I didn't buy a thing

Chelles Audiovisuel Photo Foire ~ 17 March 2024

3 March ~ le Carnaval des Femmes ~ departs place du Châtelet 14h30 (No Go)

24 February - 3 March ~ Salon International de l'Agriculture (No Go - way too many political issues)

11 February ~ Carnaval de Paris ~ Promenade du Bœuf Gras ~ departs place Gambetta - No Go (wet, cold)

31 January - 4 February - Retromobile 2024 (Flickr)

Ferrari ~ Retromobile ~ 2024

14 January - la traversee de Paris 2024 (Flickr)

la traversee de Paris 2024

15 December 2023 ~ Noir et Blanc, BnF (Flickr)

Bibliothèque nationale de France ~ 2023

6 December 2023 ~ Van Gogh - (Flickr)

1 December 2023 - Nicolas de Staël ~ City of Paris Museum of Modern Art (Flickr)

Musee d'Orsay ~ Paris 2021

30 November - 3 December ~ Salon des Vins des Vignerons Indépendants (Flickr)

Salon des Vignerons Independents ~ 2023

November 2023 ~ Les halles Saint Pierre - Two shows (Flickr)

La Halle Saint-Pierre, Paris ~ 2023 

 

Musée international d'Art naïf Anatole Jakovsky

Ben Vautier

Thursday, April 11, 2024

Read the Readme, Dumb~Me...

Important Note to Self: Read the ReadMe file!!!

I've experienced a couple of challenges recently when trying Fuji film simulations in RawTherapee.  Things just weren't working out as expected.

For instance, when downloading cinema-oriented Fuji-look-alike LUTs I learned to check very carefully whether the LUT collection was made for S-Log input.  Why?  S-Log on the video side produces a very flat file for a very specific set of reasons that have nothing to do with stills photography.  And I'm not sure how to create an S-Log image starting from a stills RAW.

No matter how hard I tried, applying an S-Log LUT to a stills image seriously distorted the colors and contrast.  What drives me crazy is that none of the cine LUTs I tried came with a ReadMe file that might explain any of this.  Apparently I'm not one of the "cool kids" who can figure this out before downloading and attempting something.  So to make my life easier I've learned to avoid cine LUTs in general.

Narrowing my search to camera profiles and LUTs developed for stills work, I belatedly learned that some of these files were specifically designed for use with "linear" camera profiles in RentWare.  The devil is in the details.  This is very important: some camera profiles and LUTs not only add the color grading/film simulation, they also manage the initial tone curve.

Once I understood that some LUTs require a "linear" camera profile starting point I was able to achieve the film simulation I was looking for.  It's correct to the point I doubt anyone would be able to tell which camera was used with the Fuji film simulation.  

When working in RawTherapee I create a "linear" camera profile by simply de-selecting "Tone Curve" in the Color Management module.  There is no need to load a specially made for RentWare "linear" camera profile into RawTherapee.

One more step is required.  To ensure the camera profile's color grading function does not influence the film simulation result, I simply de-select "Look Table" in the Color Management module as well.

Here's a quick look at how de-selecting these two components in Color Management compares to the RawTherapee default color managed state -

 

Using Fuji film simulations in RawTherapee

 

Resources -

An excellent look at how Fuji film simulations modify colors - 

https://www.imaging-resource.com/news/2020/08/18/fujifilm-film-simulations-definitive-guide 

Individual CLUTs can be downloaded off this site.  I've found that I can load a RAW image into RawTherapee, de-select "Tone Curve" and "Look Table" in the Color Management module, and then apply these film simulations without further image manipulation (as in the above image).
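
As an aside on what these CLUT files actually are: a Hald CLUT is a 3D color lookup table packed into an ordinary 2D image.  Here's a minimal nearest-neighbour sketch of the lookup, my own toy illustration (real tools such as RawTherapee interpolate between entries rather than snapping to the nearest one):

```python
# Toy Hald CLUT lookup (nearest-neighbour). A level-2 identity CLUT has
# cube side s = level**2 = 4, so 4*4*4 = 64 RGB entries.
level = 2
s = level * level

# Identity CLUT: red varies fastest, then green, then blue.
clut = [(r / (s - 1), g / (s - 1), b / (s - 1))
        for b in range(s) for g in range(s) for r in range(s)]

def apply_clut(rgb):
    r, g, b = (round(c * (s - 1)) for c in rgb)
    return clut[r + s * g + s * s * b]

print(apply_clut((1.0, 0.0, 0.0)))  # identity CLUT -> (1.0, 0.0, 0.0)
```

A film simulation CLUT is exactly this, except the table entries are the film-like colors rather than the identity.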


Another Fuji HaldCLut package - 

https://blog.sowerby.me/fuji-film-simulation-profiles/

Note: When using these HaldCluts I've found I need to lighten the tones while keeping the Color Management "linear."  I do this by modifying "Exposure" or by sliding the top end of the tone curve to the left using "Luminosity."  Doing these things allows me to match the output of the above Clut collection.
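
For what it's worth, "lightening while staying linear" has a simple numeric interpretation.  In linear data an exposure change of +1 EV is just a multiply by 2.  A toy sketch of my own, not RawTherapee's actual code:

```python
# Toy sketch (not RawTherapee's code): in a linear pipeline an exposure
# adjustment of +ev stops is a plain multiply by 2**ev, applied before the
# film-simulation CLUT, with a clip at white.
def expose(rgb, ev):
    return [min(1.0, c * 2 ** ev) for c in rgb]

print(expose([0.20, 0.10, 0.05], 1.0))  # one stop brighter -> [0.4, 0.2, 0.1]
```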


Thursday, April 04, 2024

Summary ~ a comment I posted to pixls.us

I realize that something I posted to pixls.us works as a kind of Summary of Findings.  So I thought it might be interesting to share it here.

Retromobile ~ 2024

In addition to the many good comments, I’ll throw in my $0.02 worth. Keeping in mind anything I say is worth the price you paid for it (ie: $0).

From looking at lenses for going on four decades to find a certain “magic”, here’s what I’ve learned.

    Until surprisingly recently, lens designers I’ve talked with felt that correcting for 7th order effects, while “do-able”, was a little over the top. People told me it was “unnecessary.” Modern optics can be corrected for 11th or, gasp, 13th order effects. There are a few interesting reasons to do this now.

    In general, vintage lenses typically were designed for either resolution or contrast. Modern optics can be found that strike an interesting balance between resolution and contrast (see previous paragraph). Which leads me to think the computing required to design lenses for 11th order effects was rather too great for rooms filled with human calculators (see Nikon’s Thousand and One Nights series).

    In the vast majority of imaging systems I’ve looked at, resolution limitations are found in the light sensitive materials (ie: film or digital sensor) and not in the lenses themselves when operating at their optimum aperture. OK. That’s a strong caveat, but years ago someone sent me a 75mm f/8 wide angle lens covering at least 4x5 inches that is diffraction limited from wide open. So “softness” at wide apertures for some vintage optics is there because they’ve been designed that way.

    This is why I feel many vintage lens manufacturers designed their optics to a customer base (sort of). I’m thinking of old Nikon optics, which were designed for under-corrected spherical aberration behind the point of focus as well as a veiling spherical aberration wide open. This was, I’m convinced, deliberate, to satisfy the Japanese market which valued a “delicacy” of rendition. Canon OTOH went the other way because of their customer base and over-corrected for spherical aberration behind the point of focus. Old Canon lenses can appear sharp wide open, but deliver nasty soap-bubble-ish background rendition as a result. Pentax, again broadly speaking, designed their lenses to be more neutral.

    Modern Voigtländer Heliar lenses bear little to zero rendition resemblance to lenses made for large format film in the early to mid 20th century. Which is to say, be careful of thinking naming conventions will render a scene similarly across the ages. Another example of what I mean is anything labeled “Sonnar.” How well a lens is corrected is more a function of careful design. Don’t believe me? Compare early Zeiss, mid-century Soviet, and the (justifiably) highly regarded 10.5cm/105mm Nikkor-P “Sonnar” designs. Out of focus rendition, chromatic aberration, flare, and astigmatism are treated widely differently depending on who designed the lens and are not something inherent in the basic optical layout. The Kingslake comments about the Tessar formula, noted previously elsewhere in this thread, are another excellent example. I’ve not encountered such a consistently horrid lens (and I’ve owned far too many of them) as the Zeiss 50mm f/3.5 or f/2.8 Tessar coming from the former Eastern Bloc. They blew it. It never gets “sharp”, really. f/8 seems the best it can do for an acceptable image.

    Modern optics can suffer from a surprising level of field distortion (barrel or pincushion). It appears to me that lens designers sometimes rely rather strongly on software to correct this kind of distortion since it makes it easier/cheaper to correct for chromatic aberration, astigmatism, and flare. In general, vintage lenses can be surprisingly “rectilinear” and I’ve not found it necessary to lean heavily on distortion corrections.

    Many vintage and most modern optics appear to offer pretty good field flatness. Zoom lenses can be another matter, particularly those designed for SLRs and early DSLRs. No, not all suffer from this, but I can pretty much find a weak spot in just about any zoom. There’s nearly always a “hole” somewhere in the zoom range, or so it seems.

    Even knowing all these things, trying to see an advantage of one thing over another can be difficult. I would enjoy buying a beer for the person who could sit down next to me and tell me which image was made with which lens. It’s impossible, of course. But because we’re on the “inside” and it matters to us, we often place a LOT of emphasis on the lenses we choose. I can’t tell the difference between images made with a new Sigma 24mm f/3.5 DG DN and an early-’80s Nikon Nikkor 24mm f/2.8 Ai. I could say similar things about just about any of the lenses I own, vintage or modern.

So after all these years and all this thrashing and whinging and wrangling where did I find the “magic” I was looking for? I found it in careful image processing. This means tightly controlled color management, color grading, sharpness and local contrast controls, etc, etc, etc. This means being clear with myself on what I seek in and how to express a subject/scene in the final result.

OK. Enough of that. There’s much more I could say, but why? I’ve already said too much.

Tuesday, March 05, 2024

Capture Sharpen ~ Sony 55mm f/1.8, Nikon Nikkor 105mm f/2.5, Nikon Nikkor 135mm f/3.5 with 1.4x Sigma AF teleconverter

In a previous post I noted that a couple of Nikkors mated with a Sigma 1.4x AF teleconverter were softer than the native lens sans Sigma.  This was conventional wisdom back in the day and is easily confirmed using modern cameras.

There are tools in this Digital Age, however, that were not available to us Old Dinosaurs of the Film Age.  Some of them are image sharpeners.  In particular, something called "Capture Sharpen."  It is available in Adobe RentWare as well as the Open Source RawTherapee and is used as a pre-sharpener before applying something like "UnSharp Mask" or other sharpeners (of which there are many).

So thinking a little about the softening effects of the Sigma 1.4x teleconverter, I wondered what would happen if I "Capture Sharpened" the Nikkor images.  Just to see how it compared, I thought it might be interesting to see how the Sony Zeiss FE 55mm f/1.8 responded to "Capture Sharpen", too.  And I shot everything at the soft-ish wide open aperture.

Here's what I found.

 

Capture Sharpen ~ Sony 55mm, Nikkor 105mm, Nikkor 135mm

 

The "Capture Sharpened" Nikkor/Sigma images are "sharper" than the un-sharpened Zony.  The Zony "Capture Sharpened" is, well, over the top amazing.  Wide open.

Voila the Miracle of Modern Image Processing tools.

Friday, March 01, 2024

Nikon Nikkor 135mm f/2.8 AiS, Nikon Nikkor 105mm f/2.5 K, Sony 55mm f/1.8 ZA, Sigma 1.4x teleconverter ~ Considering Old Lens Magic

Roundy-round I go, yet again, one more time, perhaps with a bit more feeling, and a bit more bravado.

I can't remember how many Nikkor 135mm lenses I've bought and sold over the years.  Usually I'll pick something up for a project, finish the project, then sell the lens after it's sat in the Toy Closet for what I consider too long.  Mainly as a source of funding for Yet Another Project.

Some time back Yet Another Project came up and, while I had some wonderful lenses at the time that might fit most of the purpose, there was a gap in my focal length lineup.  Indeed, you could probably guess the missing focal length.  That's right.  135mm.

Browsing the local on-line ads I contacted several people and considered several different lenses.  Should I go really cheap and spend around 25Euro?  Most of the lenses in this price class were either 3rd party (Vivitar, Soligor, etc) or had some kind of fault, like fungus or non-operable apertures. Or should I go a little higher end and spend over 100Euro?  Zeiss and Nikon and Leitz were typically the ones people felt were made of gold and set their asking prices accordingly.

The items I could view depended on my search criteria.  So I typically try all manner of combinations just to see what pops up.  This is how I found a lens located just down the street from me for a nicely small-ish price, well less than 100Euro.

Once home I checked the serial number of the lens to see when it was manufactured.  I thought I was looking at a Nikon Nikkor 135mm f/2.8 Ai.  But this was wrong.  It's actually a late model AiS version c.2003.

The focus systems changed between the Ai and AiS Nikkor series.  The AiS lenses focus more quickly with shorter throws than the Ai.  In this case I've read that the Ai 135mm is around 270 degrees stop to stop, where the AiS is around 170 degrees.  My concern with the AiS was that the focusing action would be too quick and that I could easily mis-focus.  If that is the only downside, I'd just have to learn to be careful with focusing during the project I had in mind.

I'd also read various comments around the 'net where folks didn't like their 135mm Nikkors and preferred the 105mm f/2.5.  Most of the comments were about how the 135mm was less sharp than the 105mm.  Since I had both focal lengths I thought it might be interesting to find out how my optics compared.

To set a baseline I used a Sony FE 55mm f/1.8 ZA (Zeiss) as a reference.  Then I remembered that I have a Sigma 1.4x AF teleconverter and wondered how that might affect performance of the Nikkors.  I figured I'd add the teleconverter to the comparison since I'd hauled it out of the Toy Box while scrounging for something else.  The 105 and 135 are short enough in physical length that maybe I could use them in the field with the bit more reach that the Sigma 1.4x would provide.

Here's what I found.  Remember to click on the following image and enlarge to 100% to see whatever there is to see.

 

Comparison ~ Nikkor 105mm, Nikkor 135mm, Sony 55mm ZA

 

Comments - 

Well... well... well...  Can I tell any meaningful difference between Zony 55mm and Nikkors?  Nope.  

Are there any differences between the Nikkors themselves?  Nope.  

I suppose, in terms of "sharpness" the Nikkors feel "fatter" in image rendition than the Zony, but how the heck do I measure something like that?  

OK.  Maybe the Nikkors are a 1/64th of a step behind the Zony.  Maybe.  But probably not.  They're all equal.  Really.

What about when I add a Sigma 1.4x AF teleconverter into the Nikkor mix?  Contrast is lower and there's a slight loss of resolution in the center of the field.  The corners look awful.

Just to check as much as I could check and to cross as many bridges as I could, I re-ran the comparison using the 135mm f/2.8 with the Sigma 1.4x, focusing first in the center of the field, taking a photo, then focusing at the outer edges, and taking another photo.  

The results speak for themselves.  The teleconverter appears to introduce field curvature.  Since I can't measure how badly the field is curved, I'm not yet sure how it will impact image-making in the field.  What I know is to avoid photographing flat subjects using this combination.  But it might be just fine for photographing motorcycles at speed on a racetrack where the edges of the scene count for nothing but blur and color.

In the end I really can't tell much difference in terms of "sharpness" between the Zony and Nikkor lenses.  I'm convinced that any differences would come down to my camera-craft and abilities to control my camera-work.  That puts a stake in the heart of "sharpness" concerns: in the case of the two lenses I own, the AiS 135mm f/2.8 is every bit the equal of the legendary 105mm Xenotar-type Nikkor.

Friday, February 16, 2024

True Focal Length ~ Sony 35mm f/2.8 ZA, Sigma 24mm f/3.5 DG DN, Tamron 20mm f/2.8 Di III

Casually reading something on the internet, I stumbled across a comment about how the field of view narrows when applying lens corrections to the Tamron 20mm f/2.8 Di III.  The writer claimed the lens went from 20mm uncorrected to 22mm corrected.

 

Bugatti ~ Retromobile ~ 2024

Image taken with a
Tamron 20mm f/2.8 Di III

 

I instantly thought that a gap from 24mm to 22mm might not be worth the effort to carry two lenses, the Tamron 20mm and a Sigma 24mm f/3.5 DG DN.

I've caught myself out in the past: reading something, not fully understanding the situation, selling a lens or camera, only to find out later that I was wrong and that the "problem" I was trying to solve was actually something completely different.

Case in point: There was a wonderfully sharp little Sigma 60mm f/2.8 DN EX E that suddenly developed a problem focusing.  I read on the internet that the AF mechanism was not up to snuff, so I sold the optic.  Only to find out a few weeks later that there was some grease stuck around the AF circuit on the shutter release of my Sony A6000.  I cleaned the A6000 and instantly everything was good again and the AF worked correctly.  All was not lost, however.  I picked up a brand new Sony 50mm f/1.8 SEL OSS that has proven to be a great jewel-like little lens.  Still, I learned something from the experience.

Thinking that the Tamron corrected and Sigma lenses were too close together in focal length, my first impulse was to sell the 20mm Tamron and buy a Sigma 17mm f/4 DG DN or a Sigma 20mm f/2 DG DN.

But hold your horses.  Maybe I should check the actual focal lengths of these and then make a better informed decision.

Taking a ruler, I measured 1100mm from the tripod mounted camera to a spot on the floor where I stretched the ruler parallel to the film plane of the camera.  Then I took a photo using three lenses, the Tamron 20mm f/2.8 Di III, the Sigma 24mm f/3.5 DG DN, and the Sony 35mm f/2.8 ZA.  

I then noted the numbers on the ruler in the horizontal direction at the very edges of the scene, uncorrected and then corrected, calculated the base of the isosceles triangle, calculated the angle at the film plane of the camera, then calculated the actual focal length of the lens at 1100mm distance from the subject.  Easy peasy.  Right?  Here's what I found.

Tamron 20mm f/2.8 Di III uncorrected - 18mm

Tamron 20mm f/2.8 Di III corrected - 21mm

Sigma 24mm f/3.5 DG DN uncorrected - 27mm

Sigma 24mm f/3.5 DG DN corrected - 27mm

Sony 35mm f/2.8 ZA uncorrected - 39mm

Sony 35mm f/2.8 ZA corrected - 39mm
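
For the record, the arithmetic collapses to a few lines of Python.  The 36mm sensor width assumes a full-frame camera, and the 1980mm ruler span below is an invented example since I didn't record the raw measurements here:

```python
import math

# Sketch of the measurement arithmetic, assuming a full-frame sensor 36mm
# wide. tan(theta) = (W/2)/d gives the half-angle of view from the ruler
# span W at distance d; effective focal length is then f = (36/2)/tan(theta).
def effective_focal_length(ruler_span_mm, distance_mm=1100, sensor_mm=36.0):
    half_angle = math.atan((ruler_span_mm / 2) / distance_mm)
    return (sensor_mm / 2) / math.tan(half_angle)

# Invented example: a lens spanning 1980mm of ruler at 1100mm measures 20mm.
print(round(effective_focal_length(1980)))  # -> 20
```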

Well, lookee there, will ya?  

The Sigma and Sony lenses have longer focal lengths than marked when focused on a subject 1100mm away.  I would have expected that with old manual focus lenses, where the entire lens group moves forward when focusing on close subjects.  Extending the lens-to-image-plane distance increases the effective focal length.  But since the Sony and Sigma are internally focusing lenses, I thought there might be a lot less of what cinematographers call "focus breathing."  That's where the size of a subject changes with changes in focus.

OK.  So I learned that the Sigma and Sony lenses are longer than expected.  

What about the Tamron?  It starts wider than marked on the front of the lens when uncorrected.  The lens suffers from a large amount of barrel distortion and I can see quite a shift around the edges when a correction is applied.  Once applied, the lens measures 21mm.  This is much closer to what is marked on the lens than either the Sigma or the Sony.

A gap from 21mm to 27mm between two lenses is actually quite enough for me.  I don't feel the need to find something different.  Buying and selling lenses is becoming a chore and is something of a thrash, so I'll stick with what I have for now.  I'm glad I checked.

Saturday, February 10, 2024

RawTherapee dcp and lcp files...

I need to remember something detailed.

 

Retromobile ~ 2024

 

In using RawTherapee to process RAW files I've found it has sophisticated color and lens management systems.

Interestingly, RawTherapee accepts industry standard dcp and lcp files.  dcp files are for color management and lcp are for lens corrections.

RawTherapee comes with some dcp and lcp files on installation, which appear to be updated from time to time.  I'm not sure who generates these files, but they seem to have done a good job for the lenses and camera models covered by the software distribution.

However, I found some of my cameras (Sony NEX, A6000, A5000, A7) and lenses (Sigma 24mm and a few others) are not supported by RawTherapee automation for the version I'm running (the latest), so I set off in search of good dcp and lcp files to fill in the gaps.

It turns out a certain RentWare implements these two file formats for its own color and lens management systems.  I wondered if I might be able to borrow them.  I run Linux, but have a computer that can boot into Windoze.  Here's what I do.

Boot into Windoze and...

  • In a browser search for "Adobe Camera Raw download" and locate the Adobe site (there are other sites that may offer downloads, but they are highly suspicious and I avoid them like the plague)
  • Download the latest "Camera Raw" plugin (I do not want LR or PS, just the RAW converter part)
  • Execute the "...exe" file to unpack the plugin
  • Descend the "c:\ProgramData\Adobe..." folder structure to locate "CameraProfiles" and "LensProfiles"
  • Copy the contents of these two folder structures into a Linux readable location/media

Boot into Linux and... 

  • Copy the "CameraProfiles" and "LensProfiles" directories and their contents somewhere under $HOME where I can easily find them
  • Open RawTherapee
  • Open an image and...
  • Under Color Management 
    • Select "custom"
    • Open the directory box
    • Locate the "camera profiles" directory
    • Descend the directory to...
    • Locate the camera model
  • Under "Profile Lens Correction" 
    • Select "LCP file"
    • Open the directory box
    • Locate the "lens profiles" directory
    • Descend the directory to...
    • Locate the right lens
  • The base image is now configured using good dcp and lcp configurations.
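
The copy step above can also be mimicked in a few lines of Python if that's more comfortable than a file manager.  This is a toy sketch: the paths are throwaway stand-ins, and the actual Adobe folder layout and file names vary by Camera Raw version.

```python
import shutil
import tempfile
from pathlib import Path

# Toy sketch of the copy step, with throwaway directories standing in for
# the real media and $HOME (the actual Adobe folder layout and file names
# vary by Camera Raw version).
media = Path(tempfile.mkdtemp())                 # stands in for the media
for folder, name in (("CameraProfiles", "example.dcp"),
                     ("LensProfiles", "example.lcp")):
    (media / folder).mkdir()
    (media / folder / name).write_text("stub profile")

home = Path(tempfile.mkdtemp())                  # stands in for $HOME
for folder in ("CameraProfiles", "LensProfiles"):
    shutil.copytree(media / folder, home / "profiles" / folder)

print(sorted(p.name for p in (home / "profiles").rglob("*.?cp")))
# -> ['example.dcp', 'example.lcp']
```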

Under "Lens corrections" there are selections for "Geometric distortion", "Vignetting", and "Chromatic aberration."  I turn off "Vignetting" because I've found that correction to be too strong for my taste.  But I do turn on "Geometric distortion" and "Chromatic aberration."

Under "Color management" there are selections for "Tone Curve", "Base Table", "Look Table", and "Baseline Exposure."  These are defined here.

On a practical level here's what I do.

  • Open an image in RawTherapee
  • Let RawTherapee select the demosaic algorithm (long topic for another time)
  • Set the lens profile
  • Select the color management dcp to be used (more on this in a moment)
  • Select "Tone Curve"
  • Select "Base Table" if selectable (this is not always implemented in some of the dcp files I've seen)
  • Select "Look Table" to get the dcp files color grading (which can be glorious, BTW)
  • Unselect "Baseline Exposure" since there is no jpg reference (read the definitions linked to above)

If I want to make further changes to "Curves", I go to the...

  • Exposure panel
  • Find the curve function
  • Select "Luminance" 
  • _Then_ make adjustments to the curve
This keeps the colors from shifting.  Remember, of course, that standard curves modify the RGB channels at the same time the luminance curve is changed.  This modifies the colors of the image, which I do not want to happen since I like the "Look Table" results and do not want them to change.
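
A toy numeric illustration of why this matters (my own sketch, not RawTherapee's actual math): a curve applied per RGB channel changes the ratios between the channels, which is exactly a color shift, while a curve applied to luminance alone preserves them.

```python
# Toy illustration (not RawTherapee's actual math) of per-channel curves
# vs. a luminance-only curve.
def curve(v):
    return v ** 0.7  # a simple brightening curve

pixel = [0.8, 0.4, 0.2]

# Applied per RGB channel: the R:G:B ratios change, i.e. the color shifts.
per_channel = [curve(c) for c in pixel]

# Applied to luminance only, then rescaling all channels by the same factor:
lum = sum(pixel) / 3                          # crude luminance stand-in
gain = curve(lum) / lum
luminance_only = [c * gain for c in pixel]

print(per_channel[0] / per_channel[1])        # was 2.0, now about 1.62
print(luminance_only[0] / luminance_only[1])  # still ~2.0, hue preserved
```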

From the list of practical things that I do I said I would comment further on selecting a <specific camera model> dcp.  The RentWare distribution is a little complex in how they've implemented their dcp directory structure.  Basically, it comes down to this.  Under "CameraProfiles" we have two ways of further descending the directory structure.

  • "...CameraProfiles/AdobeStandard/<specific camera model>.dcp"
  • "...CameraProfiles/Camera/<specific camera model>/<several dcp to choose from>"

I'll start with the ".../Camera/<specific camera model>..." profiles.  From what I can tell these are the RentWare's attempt to match the specific image style selections offered by the manufacturer.  For instance, with the Sony A7 there are vivid, neutral, standard, landscape, and other dcp selections found in this directory.  If I want an image to look similar to an in-camera style selection, this is a good place to start.

Looking in the AdobeStandard/... directory I see the RentWare has offered something a little different.  This appears to be their own interpretation of what a "good image" would start with.  I find in the case of the Sony A7 that the ".../AdobeStandard/Sony ILCE-7 AdobeStandard.dcp" offers a more muted color starting point (a more muted yellow, for instance) than the Camera/<specific camera model>/... "standard" dcp.  It pays to experiment with and experience these various dcp options.

You'll notice that I've said nothing about using the RawTherapee "Processing Profiles."  This is because I've found the automated selections to be too strong for my quickly evolving image processing tastes.

OK.  There it is.  Lots and lots of detail.  But if I save a base processing configuration, the processing workflow can collapse to a single button push.  It's pretty sophisticated stuff, but I'm learning it's well worth the while to understand what's going on.

Thursday, February 08, 2024

Sony FE 55mm f/1.8 ZA, Nikon Nikkor-P 105mm f/2.5 ~ considering software magic

In a prior blog entry I commented on how amazing image making is in light of technologies, materials, and manufacturing.  I would like to continue along these lines and add something we never had in the old film/chemistry days.

Software.

To me this is as amazing as the physical implementation of sensors and cameras and lenses.  In software we can implement just about any standard or process or idea we want.

There are algorithms for many aspects of image processing that directly affect how we view a photograph.  Exposure, contrast, color profile, color depth, color management, image processing spaces (up to 32bit floating point!), tone mapping, film emulations, lens corrections, image alignment, perspective corrections, image stitching, high dynamic range, and... and... and... there are so many things implemented in software in various ways that it can be mind boggling.

It can be easy to be confused or frustrated and overwhelmed by all the options.  What do they mean?  What do they do?  How do I take advantage of this selection compared to that one over there?

Considering one example from RawTherapee, there are 19(!!!) options for demosaicing a RAW file on import into this software.  Yes, certain selections are made for users if you let the software automation retain control, but users are also free to select something else.  Each and every option has a reason to exist.  Understanding them can take time, certainly, but what a rich set of possibilities there is in just this one single step.

Looking at another example, there are at least 15(!!!) sharpen operators in the Open Source G'Mic.  15 different direct ways of sharpening an image.  Each one is implemented based on a formula/algorithm that attempts to enhance some condition or another.  If we add high-pass filtering in a layer, there are even more image sharpening options, and I've not counted the sharpening operators in the Gimp (where I run G'Mic).

Taking this yet another step further, mixing various operators can refine an image processing sequence.  In the case of the subject matter I currently enjoy photographing (automobiles and motorcycles) and taking into consideration digital enlargements (ie: going from 6000x4000pixels up to, say, 12,000x8000pixels) I've found the following sequence to give generally outstanding results.

  • Open TIF RawTherapee output in the Gimp
  • G'Mic DCCI2x upsize the image
  • G'Mic Inverse Diffusion sharpen using just enough iterations to avoid introducing visible artifacts
  • G'Mic Octave sharpen the image

In this case I use _two_ image sharpeners.  One avoids pixelation (Inverse Diffusion) and the other gently sharpens subject edges.

Sometimes Octave is too strong.  In which case I use one of two sharpen operations.  One is applying a high-pass filter layer blended using "Soft Light" over the base DCCI2x'd Inverse Diffusion sharpened image.  This is brilliant for keeping pixelation to a minimum.
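
Here's a toy 1-D version of that high-pass/Soft Light trick.  It's my own sketch using the simple "pegtop" Soft Light formula; the Gimp's actual blend math differs in detail.

```python
# Toy 1-D version of "high-pass layer blended in Soft Light" sharpening,
# using the "pegtop" Soft Light formula (the Gimp's math differs in detail).
signal = [0.2, 0.2, 0.2, 0.8, 0.8, 0.8]   # a soft-ish edge

def blur(s):
    # 3-tap moving average with clamped edges
    return [(s[max(i - 1, 0)] + s[i] + s[min(i + 1, len(s) - 1)]) / 3
            for i in range(len(s))]

# High-pass layer: mid-grey everywhere except around edges.
high_pass = [x - b + 0.5 for x, b in zip(signal, blur(signal))]

def soft_light(base, blend):
    return (1 - 2 * blend) * base * base + 2 * blend * base

sharpened = [soft_light(a, b) for a, b in zip(signal, high_pass)]
# Flat areas (high_pass = 0.5) pass through unchanged; the edge gains
# contrast, darkening just below it and brightening just above it.
print(sharpened)
```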

If that approach doesn't work and I need something stronger I use...

  • Create a new G'Mic Gradient Norm layer from the base image
  • Duplicate the base DCCI2x'd Inverse Diffusion sharpened image as another layer
  • Copy the Gradient Norm layer into the mask of the duplicated base image
  • Apply a hard G'Mic sharpener, such as Richardson Lucy
  • Duplicate the sharpened layer to add further sharpness to the overall image

You have to see any and all of this to believe it.  Which is another point I'd like to make.  When using various sharpeners, I've found it really helps to watch the action of a sharpener at full pixel-peeping resolution.

Returning to the very start of the image processing pipeline, there is a sharpening operator in RawTherapee that I've found to be very useful.  It's called "Capture Sharpen."  It's best applied to low-ISO images where noise is at a minimum and is used to "correct" the effects of AA filters or image smearing due to slightly missed focus or slight camera movement.

The effect of Capture Sharpen can be dramatic and yields an image that I feel avoids any effects of being "over sharpened." 
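
As I understand it, Capture Sharpen is a deconvolution under the hood.  Here's a toy 1-D Richardson-Lucy sketch, my own illustration rather than RawTherapee's implementation, and it assumes the blur kernel is known:

```python
# Toy 1-D Richardson-Lucy deconvolution: iteratively recover a sharper
# signal from a blurred observation, assuming the blur kernel (the point
# spread function) is known.
psf = [0.25, 0.5, 0.25]

def convolve(s, k):
    half = len(k) // 2
    return [sum(k[j] * s[min(max(i + j - half, 0), len(s) - 1)]
                for j in range(len(k)))
            for i in range(len(s))]

def richardson_lucy(observed, k, iterations=30):
    estimate = observed[:]                      # start from the blurry data
    k_mirror = k[::-1]
    for _ in range(iterations):
        blurred = convolve(estimate, k)
        ratio = [o / max(b, 1e-9) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, k_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

sharp = [0.1, 0.1, 0.1, 0.9, 0.9, 0.9]          # ground-truth edge
observed = convolve(sharp, psf)                 # what the "sensor" records
restored = richardson_lucy(observed, psf)
# The restored edge sits much closer to the ground truth than the blurred
# observation does.
```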

In the case of my comparison of a Sony FE 55mm f/1.8 with a c.1973 Nikon Nikkor-P (Xenotar) 105mm f/2.5 pre-Ai, I thought I'd confirm these effects of Capture Sharpen.  Here's what I found.

 

Sony 55mm f/1.8, Nikkor 105mm f/2.5 ~ Capture Sharpen

 

A comment about Capture Sharpen: This operator is used in certain RentWare, too, and sets the basis for their digital enlargement process in something I think they call "Smart Sharpen" and "Super Resolution".  We can do the very same thing in Open Source Software.  I described three ways to perform digital enlargements earlier in this post.

From this little comparison I see that Capture Sharpen applied to the Nikkor-P image brings the crispness/sharpness/contrast well into line with the native un-sharpened Sony FE 55mm.  I can take an old lens and turn it into a digital-era Sharpness Monster.  Applying Capture Sharpen to the Sony FE images takes images to another amazing level of sharpness.  

How cool is all this?  And this is just one little step in the image processing pipeline.  

Software.  I love it!  Oh the possibilities...

Wednesday, February 07, 2024

Sony FE 55mm f/1.8 ZA, Nikon Nikkor-P 105mm f/2.5 ~ considering the magic

Sometimes when I'm comparing this and that and poking here and looking there at photography, equipment, and processing I stand back and marvel at the magnitude of science and engineering that go into making any of this possible.  I've written this before but I feel the need to underscore this point.  It's "magical" on some level.  Consider what knowledge, which materials, what kind of manufacturing it takes to make each step in this process.

  • Light hits a light sensitive diode
  • Diode emits a tiny electrical signal
  • Hardware applies gain to amplify the signal (ISO sets the amplification level)
  • Amplified signal enters an analog to digital converter (ADC)
  • Digitized information exits the ADC (may receive additional signal processing as in the cases of extended ISO settings)
  • Digital information can now be written into a RAW file
  • This RAW file can then be loaded into software on a computer/tablet/mobile-phone
  • De-Mosaic function performed on a RAW file
    • Individual "pixels" of RGGB (red, 2x green, blue) information are processed using surrounding "pixel" data to calculate color and luminosity 
    • Such that there is now a different kind of information (de-mosaic calculated) for each "pixel"
    • Changing each and every original red, green, and blue location into full color with brightness information
  • De-mosaic'd images still look ghastly as the colors are still "off", so...
  • Software "massages" a de-mosaic'd image using ICC, DCP, or other "camera profile" filters
  • The image is now displayed to the user
  • User can now begin performing additional modifications using the tools the software provides

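The de-mosaic step in the middle of that list can be sketched in a few lines.  This is only the crudest bilinear version (real demosaicers such as AMaZE or RCD are far more sophisticated), but it shows the mechanics: each photosite knows only one color, and the two missing colors are interpolated from the neighbors.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Toy bilinear demosaic of an RGGB Bayer mosaic (float array, HxW)."""
    h, w = raw.shape
    planes = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3))
    # Scatter each sample into its own channel plane: R at (even, even),
    # G at (even, odd) and (odd, even), B at (odd, odd)
    for (r0, c0), ch in [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 2)]:
        planes[r0::2, c0::2, ch] = raw[r0::2, c0::2]
        masks[r0::2, c0::2, ch] = 1.0
    # Bilinear interpolation = normalized convolution over the known samples
    k = np.array([[1.0, 2.0, 1.0], [2.0, 4.0, 2.0], [1.0, 2.0, 1.0]])
    rgb = np.empty_like(planes)
    for ch in range(3):
        rgb[..., ch] = convolve(planes[..., ch], k) / convolve(masks[..., ch], k)
    return rgb

# A flat grey mosaic should come back as flat grey in all three channels
flat = bilinear_demosaic(np.full((8, 8), 0.5))
```
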
After all this we can now consider various elements of the image making system.  In my case I've looked at lenses.  To me it's absolutely amazing that after passing through the process described above that I can now wonder if one lens is "sharper" than another.  I'm completely and utterly reliant on all that knowledge, understanding, engineering, manufacturing, materials, and software just to reach this potentially interesting point.

Whew!  It sometimes takes my breath away.

With this in mind I wanted to see how the modern Sony FE 55mm f/1.8 ZA compared with an old c.1973 Nikon Nikkor-P (Xenotar) 105mm f/2.5 pre-Ai lens.  Here's what I found.

 

Sony 55mm f/1.8, Nikkor 105mm f/2.5 ~ Comparison

 

It's easy for me to see the newer Sony FE is sharper than the Nikkor-P.  Though I have to admit the old Nikkor-P still looks pretty good.  The differences between the two lenses come down to the Sony FE being the contrastier optic when I use the exact same image processing pipeline applied to both lenses.  

This leaves room for choosing a different process pipeline. Software is flexible in ways I never experienced in the old film/chemistry days when the Nikkor-P was first introduced.  I will consider the role of software sharpening in image processing in the next entry on this topic.
 

Thursday, February 01, 2024

Then and Now ~ part two

Thinking a bit further about the size of the gear we used to have compared to what we have today, I remembered there were some wonderfully compact interchangeable lens cameras.  Two in particular attracted my attention, though I never could afford them, even on the used market: the Minolta CL and CLE.

So I thought I'd have a quick look at the lenses and camera body for that small Minolta and compare them to Sony APS-C.  The thing that attracts me, even now, to Sony APS-C is that this series remains smaller than Fuji, Canon, and Nikon devices of equivalent sensor size.  

I noticed years ago just how big Fuji, in particular, is.  Check around and you'll see that Fuji with the small sensor is as big as a Sony full frame.  Makes me wonder what's taking up all that space in those cameras.  Canon makes me wonder the same thing.

Continuing with compactness, it's easy to see just how light and small the Minoltas were.  Then I remembered another camera I never owned, the Contax G1.  It's rather modern and the lenses are AF.  It could be interesting to compare AF film lenses with digital.  For this I chose a couple of Sigma EX DN lenses and a Sony OSS.

Compact Cameras          Weight      Length   Height   Depth
Contax G1                450 grams   133mm    77mm     35mm
Minolta CLE              380 grams   124mm    77.5mm   32mm
Sony NEX-5T              276 grams   111mm    59mm     39mm
Sony A6300               404 grams   120mm    67mm     49mm

Lenses                   Weight      Length   Diameter

Contax
Biogon 21mm f/2.8        200 grams   35mm     59mm
Planar 35mm f/2          160 grams   29mm     56mm
Planar 45mm f/2          190 grams   39mm     56mm
Zeiss Sonnar 90mm f/2.8  290 grams   54mm     59mm

Minolta
28mm f/2.8               135 grams   35mm     51mm
40mm f/2                 105 grams   24.5mm   51mm
90mm f/4                 250 grams   60mm     51mm

Sony APS-C
Sigma 19mm f/2.8 EX DN   140 grams   46mm     61mm
Sigma 30mm f/2.8 EX DN   135 grams   39mm     61mm
Sony 50mm f/1.8 OSS      202 grams   62mm     62mm

Just as with full frame, the big changes from then to now are the increased capabilities of digital over film.  And in terms of compactness I'm happy to see that certain of today's APS-C devices aren't much different from what we had back when we used film.  I wish Sony still offered the NEX series APS-C.  The versions without an EVF are truly compact and light.

Onward.

 

Retromobile ~ 2024

Sunday, January 28, 2024

Then and Now ...

One evening while casually browsing the internets and stumbling on someone waxing poetic about film cameras, rangefinders, and the "Leica esthetic" I remembered the two M3s I owned back in the early '80s.  One had an old 50mm f/1.5 and the other a 35mm f/3.5 (if memory serves).  I wanted to see if I could find that "Leica esthetic" and, relatedly, that "Leica look" for myself.

I was always looking for the "magic."  Was it in the cameras?  Was it in the lenses?  This is why I tried so many different systems.  There was a Canon F1 (original) with four or five lenses, and a little later I had a Pentax MX and after that a Nikon FM system.  

la traversee de Paris 2024

As for the "Leica look" and "Leica esthetic", other than the interesting rendering of the Leica 50mm f/1.5 when shot wide open I found my SLR images of the day to be very similar to the rangefinder.  It took me many decades to realize the "magic" was the nut behind the viewfinder.

Which all led to another late evening musing.  I wondered how the sizes and weights of film camera equipment 40 years ago might compare to digital.  So I did a quick spreadsheet of sizes and weights.  Just because.  Late evening musing.  Right.  Here it is.


Cameras                        Weight      Length   Height   Depth
Canon F1                       820 grams   147mm    99mm     43mm
Leica M6                       575 grams   137mm    77mm     40mm
Sony A7                        474 grams   127mm    94mm     48mm

Lenses                         Weight      Length   Diameter

Canon FD
Canon 20mm f/2.8 FD SSC        345 grams   75mm     58mm
Canon 24mm f/2.8 FD SSC        330 grams   66mm     53mm
Canon 35mm f/3.5 FD SSC        325 grams   64mm     49mm
Canon 50mm f/1.8 FD SSC        200 grams   63mm     39mm
Canon 85mm f/1.8 FD SSC        425 grams   67mm     57mm

Leica
Leica Super-Elmar 18mm f/3.8   309 grams   49mm     61mm
Leica 24mm f/3.4 Elmar         260 grams   40mm     56mm
Leica Summarit 35mm f/2.5      220 grams   43mm     51mm
Zeiss 35mm f/2.8               178 grams   30mm     51mm
Leica 50mm f/2 Summicron       242 grams   44mm     53mm
Leica 90mm f/2 Summicron       635 grams   102mm    66mm
Leica 90mm f/2.8 Elmarit       395 grams   76mm     55mm

Sony
Tamron 20mm f/2.8              220 grams   64mm     73mm
Sigma 24mm f/3.5 DG DN         225 grams   64mm     51mm
Sony 35mm f/2.8 ZA             120 grams   37mm     62mm
Sony 55mm f/1.8 ZA             281 grams   71mm     64mm
Sony 85mm f/1.8                371 grams   78mm     82mm

Comparing the old film camera dimensions to a 10 year old Sony A7 full frame device quickly shows how shapes have evolved. The Canon F1 was a rather hefty device, even back in the day.  The Leica M-series was built rather like a brick (or so it seemed to me).  If one preferred their SLR to be light and compact, Pentax made those wonderful M-series cameras and Olympus offered the OM-1.

Leica lens dimensions remain comparatively small, with the Sony digital lenses I chose being nearly as compact.

Comparing lens weights shows a couple things.  Digital lenses can weigh less than old rangefinder optics.  SLR lenses are monsters compared with these two, but we already knew that, right? Similarly, if one wanted to save size and weight while sticking to SLR bodies, Pentax and Olympus both offered some interesting things.  I imagine there's a reason why lenses from Pentax and Olympus remain popular among the "focus peaking" crowd.

The obvious thing that has evolved over the years is capability and flexibility.  Film was a one-trick pony.  The ASA and color/monochrome selections were made when the film was loaded into the camera.  Lenses were strictly manual focus.  How we used these old systems necessitated anticipating the needs of the situations we found ourselves in.

Come to think of it, that was really good training for moving into digital.  Anticipation and planning can be helpful.  It forces me to think through a situation in advance, rather than walking into a scenario flat-footed and having to react.  Reacting leaves me all fumble-fingered and confused.

Wrapping up my late evening musings, it's absolutely remarkable how much technology we now have available to us, even as the size and weight of things have pretty much remained constant.

Tuesday, January 23, 2024

Sony A7 (original) tethered to Linux...

I need to document how I tethered a Sony A7 (original) to my Linux laptop.  It's too easy to forget the details since I don't do this often enough for it to stick in my brain.

Process ~

  1. Install "Entangle" (the software manager for my Linux installation was able to find this quickly and install it easily)
  2. Turn on Sony A7 and menu dive to "USB Connection" and set to "PC Remote"
  3. Connect Sony A7 to the Linux computer using an appropriate USB cable (it's the same one I use to transfer files off the SD card in the camera onto the Linux device)
  4. Start Entangle and verify the camera is connected
  5. Everything should be intuitive afterward.

Details - 

  • Entangle was set to delete images in the camera after they were downloaded. Entangle Settings allowed me to leave the images in both places, computer and camera.
  • Images downloaded by Entangle are saved to /home/<login_name>/Pictures/Capture.  I need to look at how to change the destination directory.
  • I haven't yet figured out how to get the Live View image to display in Entangle.  Until then, I need to use the LCD or viewfinder to frame the scene, then use Entangle to manage the camera settings and shutter release.

I think there's a way of connecting the output (saved images) from Entangle to Darktable.  Since I don't use Darktable I don't know the details.  I'm still looking at RawTherapee to see if I can connect it to Entangle.  Though it's not really a problem: as long as I have RawTherapee up and running, I can see the Entangle downloaded images in the "Capture" directory and edit them from there.

 

la traversee de Paris 2024

Upsizing ~ a recipe using the Gimp and G'Mic

Yesterday I sent a 100mpixel image to a friend to have them inspect it closely and pixel-peep to their heart's content.  

 

la traversee de Paris 2024

 

The photo started as a 24mpixel Sony A7 image of a motorcycle.  It's a rare beast, the motorcycle, and I used a fine 20mm lens on the tripod mounted A7.  I'm imagining just how glorious a large print might look in going from the native 6000x4000 pixel file size to 12000x8000.

I shared the image with this particular friend because they shoot Fuji GFX 100mpixel cameras.  If there is something amiss, they would spot it.

Their verdict? 

"The detail is just mind blowing really ! ! ! !... no arguments on this end... "

An hour later a pretty little Sony A7R came up on one of my favorite shopping sites for a rather decent price.  Should I get it? was the question of the evening.  Based on my friend's reaction I think it's pretty clear the answer would be no if I was thinking in terms of "improved" resolution over what I already own.

Here's the recipe I used -

  1. Open the image in RawTherapee, Capture Sharpen and process the 24mpixel image to taste
  2. Open the image from step #1 in the Gimp and continue with...
  3. G'Mic DCCI2x upsize (found under the "Repair" tab)
  4. G'Mic Inverse Diffusion sharpen set to between 5 and 10 iterations (found under the "Details" tab)
  5. G'Mic High ByPass filter applied to a layer copy of the image from step #4 (also found under the "Details" tab)
  6. Set the High ByPass filtered layer blend mode to "Soft Light"
  7. Flatten and save the result
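
Steps 5 and 6 (the high-pass layer blended back in Soft Light) can be approximated numerically.  This little sketch assumes a Gaussian blur for the high-pass and the Pegtop soft-light formula; the Gimp's own blend math may differ slightly, so treat it as an illustration of the idea rather than an exact reproduction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_pass(img, radius=4.0):
    """High-pass layer: the image minus its blur, recentred on mid grey."""
    return np.clip(img - gaussian_filter(img, radius) + 0.5, 0.0, 1.0)

def soft_light(base, blend):
    """Pegtop soft-light blend of `blend` over `base` (both in [0, 1])."""
    return (1.0 - 2.0 * blend) * base * base + 2.0 * blend * base

rng = np.random.default_rng(0)
img = rng.random((64, 64))                 # stand-in for the upsized image
result = soft_light(img, high_pass(img))   # steps 5 and 6, then flattened
```

The nice property of this combination is that the high-pass layer is mid grey everywhere except near edges, and soft light with mid grey is a no-op, so only the edges get the extra local contrast.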

A couple of comments about the upsizing choices I made - G'Mic DCCI2x does a very good job avoiding the pixelation commonly seen along diagonal hard/sharp edges with other upsize operators.  Similarly, I've found that Inverse Diffusion does not over-emphasize pixelation, even when using up to 10 iterations, unlike USM, Octave, or Richardson-Lucy.  Finally, again to avoid pixelation, applying a High ByPass filter layer blended in "Soft Light" seems to do the trick.

Occasionally I've found applying a light Noise Reduction in step #1 with RawTherapee and again after step #3 in the Gimp can help keep colors "true" along contrasty, sharp edges.  For the motorcycle image I didn't need to apply any NR.

As always, if something isn't clear, let me know and I'll try to do a better job explaining. I see that the first edit of this entry wasn't clear, so it's been updated to correct my mistakes.

Thursday, January 18, 2024

Color Management ~ a recipe for RawTherapee

After having slogged through all that color management and "color science" madness I've settled on a simple, pleasing color processing recipe.

I use Open Source tools (RawTherapee and the Gimp) running on MintOS Linux, so the following might not directly apply to proprietary software nor RentWare processing applications running on Apple or Microsoft systems.  

Also, while I refer to the dcp color management file, many places call this a "camera profile."  I'm just trying to be specific so that I don't confuse ICC-based color management systems with dcp-based software.

In RawTherapee -

  1. Open RAW file
  2. Let RawTherapee demosaic the image (which it will do for you before displaying an image)
  3. Apply lens corrections
  4. Capture Sharpen, if the ISO is low enough, otherwise apply an appropriate level of Noise Reduction
  5. Apply a camera-specific dcp in Color Management
    1. Select: "Tone Curve" (this one is particularly important for the rest of the process)
    2. Select: "Look Table"
    3. De-select: "Base Exposure" (this will make fewer Exposure modifications possible in the following step...)
  6. Exposure slider to set the overall scene brightness (being careful to avoid clipping any of the channels)
  7. Gentle adjustments of the Luminosity channel to fine tune contrast and brightness (again, being careful to avoid clipping any of the channels)
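
For the "avoid clipping" caution in steps 6 and 7, the underlying check is simply whether values pile up at the channel limits.  A toy numpy illustration with made-up numbers:

```python
import numpy as np

def clipped_fraction(channel):
    """Fraction of pixels sitting at the channel's limits."""
    return float(np.mean((channel <= 0.0) | (channel >= 1.0)))

rng = np.random.default_rng(1)
raw = rng.normal(0.8, 0.2, 10_000)       # a bright-ish channel
before = np.clip(raw, 0.0, 1.0)          # as metered
after = np.clip(raw * 1.5, 0.0, 1.0)     # pushed roughly half a stop too far

# Pushing exposure drives more pixels onto the right wall of the histogram,
# which is exactly what RawTherapee's clipping indicators warn about
```
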

I'm not yet clear on whether steps 5 and 6 should exchange places (see Update below).  It might be the case that stretching and changing the color space (exposure, brightness, contrast) changes how the dcp colors map onto an image.  This is something I'm still considering, though I've not yet been able to confirm any difference based on where color management is applied (as step 5, or moved down as step 6 after the exposure is set).

For the way I've written the recipe, enabling "Tone Curve" in Color Management and disabling "Base Exposure" does the heavy lifting of setting a decent starting point for exposure.  The curve is embedded in some (many?) dcp files.  Of course, if you don't like the look of it you can disable it in Color Management and use the Luminosity channel exclusively.  De-selecting "Base Exposure" gets around RawTherapee's inability to match in-camera jpg tones.

Using "Standard", "Film Like", or "Perception" curves can modify colors as brightness and contrast are changed.  This is why "Luminosity" is so useful.  It changes brightness and leaves the dcp color alone.

If you don't already have a decent dcp, this person has a downloadable zip for RawTherapee that comes from the big RentWare provider.  I think these are beautiful and are much better than any of the dcp files I created.

 [Day +1 Update: I measured the colors in steps 5 and 6 and remeasured them when swapping the sequence of these steps.  There's a small difference in color.  So what I've taken to doing is running the process as written, then going back after steps 6 and 7 (when I'm done processing for exposure and contrast) to re-run step 5.  I do that by selecting "Camera Standard" and then selecting "Custom" where the dcp files are that I use.  This sets the colors to the processing levels I intend and avoids the small/minor color shifts that are introduced as the Luminosity channel Exposure and Contrast are manipulated.]


la traversee de Paris 2024

Wednesday, January 17, 2024

Random Thoughts on Photography [10] ~ Getting to "linear"

A friend emailed me recently and asked if I knew about linear profiles?

I had to stop a moment and think about it, because I'd not heard it referred to in this way.  

People are working to get linear curves into certain RentWare.  Remember my comments on "closed systems"?  There are long discussions on the topic of "linearity" in image processing.

If I understand correctly, the re-linearization of RentWare is through the application of a "profile."  The goal being to retain as much highlight tonal separation as possible.  It appears that someone realizes how RentWare strongly raises highlight tones and wants to "re-correct" that.  Raising highlights to make an image look "good" flattens tonal separation.  For example, in landscape photography, think clouds.

Further, if linear profiles are actually camera profiles applied at the initial color management stage, then these must be working to counteract the effects of the software.  Some RentWare uses dcp format files which can have a "curve" built-in.  Perhaps in those cases the camera profile does not specify a "curve."  As you can see, most of this is wild speculation on my part.

In my little Linux/RawTherapee/Gimp world, struggles most often revolve around learning and understanding as there are often fewer and different limits placed on the user.  Generally, I can figure out how things are done in proprietary software and RentWare by studying various on-line guides and such.  So I set out to try and understand this problem of linearity in RentWare.

One of the things I learned about processing black and white images in RawTherapee is that I get the best results when I don't use any of the automated selections that attempt to make an image look good.  Rather, I take the center of the curve and raise it until the mid-tone and highlight relationships look correct to my way of seeing.  Then I mess around with the "lightness" and "contrast" sliders.

Looking closer I saw that RawTherapee set their starting point at a "linear curve."  This is the default/baseline post-demosaic and post-color-management state.  The man page says...

Linear curve ~ This represents the unaltered (or linear) image, so without any tone curve applied. It disables the curve...

No wonder I didn't understand what the problem was in RentWare and the need for "linear profiles."  I already have and frequently start from a proper linear/flat curve/profile.  As far as I can tell, the problem of linearity doesn't apply to my particular process flow.

Beating a dead horse, RentWare and proprietary software work to give us a good looking image in as few steps as they feel they can get away with.  In so doing they take away a fair amount of control early in the pipeline.  Taken to its (il?)logical end, mobile phone image processing takes us even further down the image processing pipeline before it hands control over to us, the users.

One last thought: While I can't really comment on how efficient RentWare re-linearization is, I wonder how much color distortion is being introduced into the process?  If these are dcp format files applied at the color management stage, the RentWare linear profiles probably are just fine.  I couldn't help but ask, that's all.

 

la traversee de Paris 2024

Tuesday, January 16, 2024

Random Thoughts on Photography [9] ~ the horror of "Curves"

Recently I took a quick look at color management.  

 

la traversee de Paris 2024

 

The science of human color perception is vast and by now pretty well understood.  You can read various studies and findings here on the internet.  You can also find standards that have been developed based on human perception models of color and luminance, so I won't try to bore anyone with the details.  

Of course there are many ways of "getting there from here."  There are algorithms that implement various aspects of color science.  Unfortunately these are the kinds of things that are, for better or worse, hidden from consumers to a certain degree.  On the other hand, camera manufacturers and RentWare do their best to present us with "pleasing images" by making early decisions for us.

Back when dinosaurs roamed the earth and film was what we used to make a photograph, we were completely dependent on what manufacturers produced.  It was their job, the manufacturers that is, to find ways of pleasing us the consumer so that they could sell more product. Now we have software that does similar things, trying to simplify as much as can be simplified, while offering the consumer some sense of control over the final "look" of their images.  

Rarely do we have access to something that offers us complete control, from the yet-to-be-demosaiced RAW through to the fully color managed, ready-for-additional-user-input stages.  It's rather like watching an alchemist mix up egg white/ether/gunpowder/silver nitrate on their way to coating a glass plate in the wet-plate collodion process.  When dry plates and film were introduced all the "dirty work" was taken care of for us and we no longer had to be alchemists to make a photograph.

In trying to transfer image processing settings from one system to another I noted that camera manufacturers and RentWare companies tend toward "closed" products.  That is, they can "bake" into their image processing software their own "special sauce" and it is very difficult to understand what's going on behind the scenes.  I can imagine this is confusing and might have led to the rise of on-line discussion forum "color science" wars.  No one really knows much of anything, but people seem to have a lot of ideas.

RawTherapee is a software for image processing that doesn't hide much, if anything.  I'll bet this is one of the reasons people tend to avoid it.  Yes, it's Open Source.  Yes, it's free (as in liberated).  Yes, it can be very very complex.  But, if one understands what they're doing, many of the "icky bits" can be streamlined into a simple one button process.  In fact, the application provides many pre-configured processes.  I've found it to be a feature-rich piece of software.  The engineers seem to really know what they're doing.

One of the things I noticed while trying to understand the demosaicing and color management process was that once I got through these stages there are several different kinds of "Curve" mode algorithms.  Ooph.  More to learn and understand.  Will this never end?  Possibly not.

Girding the loins, I set out to fill in my knowledge gaps about "Curve" modes.  Here is what RawTherapee's man page has to say.

Curves ~

Standard ~ ....the values of each RGB channel are modified by the curve in a basic "correspondence" method, that is the same curve is applied to all channels...

What I've experienced in using this "Curve" is that manipulating the luminosity curve equally, and quite strongly, manipulates the RGB channels.  Changes in luminosity and contrast directly affect color saturation.  Strongly increased contrast yields strongly increased color saturation.  This is "un-natural" in terms of human perception of color.

Strong "Curves" manipulation can distort information and I have to be very careful to guard against losing subtle tonal gradations in shadow and highlight regions.  Interestingly, this is the classic "Curves" algorithm that I'm sure everyone knows and loves.  We've all gotten rather used to these effects and have learned to either work around them or embrace them as part of our images' "look."

Weighted Standard ~ ...use this method to limit the color shift of the standard curve...

This algorithm decouples the behavior of the RGB channels from the luminosity curve by using a different method.  Color saturation changes are much more subtle, in fact more subtle than with the following "Curve" type, "Film-Like".  Weighted Standard looks and behaves more like "Luminance" and "Perceptual".

Film-Like ~ The film-like curve provides a result highly similar to the standard type (that is strong saturation increase with increased contrast), but the RGB-HSV hue is kept constant - that is, there are less color-shift problems. ..

This is pretty interesting, actually.  Color saturation shifts remind me very much of film.  I still have to guard against subtle tonal loss in the highlights and shadows, but I've found "Film-Like" to be a little more manageable than "Standard."

Saturation and Value Blending ~ This mode is typically better suited for high-key shots...

This mode is a strange one to me.  I read the description and can't make heads nor tails of what's going on, which means yet another learning opportunity is presenting itself.  I guess I just don't shoot enough high-key to know how best to use this "Curve".

Luminance ~ Each component of the pixel is boosted by the same factor so color and saturation is kept stable, that is the result is very true to the original color...

And here is something that I'm still coming to grips with.  Using "Luminance" or the next mode, "Perceptual", both of which stay "true to the original color", illustrates for me just how bland the real world is.  I begin to see just how much color distortion we've become accustomed to.

Coupling "Luminance" with LAB color space functions can bring an image more in line with "Standard" and "Film-Like" saturated color images.  But there's a subtle twist.  Highlight and shadow areas are controlled in a LAB color space and can retain more/different information.  This is an interesting option for potentially unique image processing.

Perceptual ~ This mode will keep the original color appearance concerning hue and saturation, that is if you for example apply an S-curve the image will indeed get increased contrast, but the hues will stay the same and the image doesn't look more or less saturated than the original...

If I really need saturation, there's always either the traditional RGB saturation slider or the LAB colorspace "chromaticity" slider.  This mode looks similar to "Luminance".
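
The basic difference between "Standard" and "Luminance" behavior is easy to demonstrate on a single pixel.  In this sketch (my own simplification, not RawTherapee's actual math) the same S-curve is applied per channel, "Standard"-style, versus as a single factor derived from a crude luminance, "Luminance"-style.  The per-channel version raises saturation; the single-factor version leaves it untouched.

```python
import numpy as np

def s_curve(x):
    """A simple contrast-boosting S-curve (smoothstep)."""
    return x * x * (3.0 - 2.0 * x)

def saturation(rgb):
    """HSV-style saturation: (max - min) / max."""
    return (rgb.max() - rgb.min()) / rgb.max()

pixel = np.array([0.60, 0.45, 0.30])   # a muted orange

# "Standard": the same curve hits every channel independently
standard = s_curve(pixel)

# "Luminance"-style: one factor from the pixel's brightness scales all
# channels equally, so channel ratios (and thus saturation) are preserved
lum = pixel.mean()                     # crude luminance stand-in
luminance = pixel * (s_curve(lum) / lum)
```

RawTherapee's real "Luminance" mode uses properly weighted luminance rather than a plain channel mean, but the principle, one common factor instead of three independent ones, is the same.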

This is a lot of information to digest, particularly if one isn't used to so many options.  What I've taken to doing is working with an image and trying each "Curves" mode to see what it does, and as a learning exercise to try to match "Standard" and "Film-Like" output.

For instance, I recently re-worked an image that I'd first processed using "Standard".  This time I worked it using "Luminance" curves mode.  I pushed hard on the "Chromaticity" LAB color space slider to get a similar looking image.  But with "Luminance" I was able to retain good highlight and shadow detail that I found to be more properly rendered.

In another instance, I worked with "Film-Like" and used gentle curves manipulations.  The output of that process is fairly "natural".  Using "Luminance" on the same image, again using subtle curves manipulations yielded what felt like a color-drained scene.  Again, it illustrated to me just how much we've come to accept wildly saturated images as "normal."

I have a few more things to share about curves and such.  But I'll save them for another time.  Until then, I will continue to work with RawTherapee's curves modes and see if I can begin to match the various proprietary software outputs.  As a further learning exercise, of course.

Saturday, January 13, 2024

Random Thoughts on Photography [8] ~ color management and color "science"

Chateau de Sceaux ~ 2023

 

I've been thinking and poking and looking at the topic of color management in light of manufacturer claims.  Certain camera companies claim to have a "secret sauce" that is called "color science."  I wanted to consider this and see if there might be ways of pollinating these much vaunted "color sciences" across other manufacturer flowers, er, cameras. 

Background ~

I smile as I remember the battles that were waged on-line over who had the "better color science", Canon vs Sony.  I still read comments from Fuji users who've fallen in love with Fuji's film profiles.  Leica users wax poetic about a certain "Leica look."  And for years I've read how Hasselblad has the best "color science" of any of the commercial camera manufacturers.

Is it real or is it marketing blather?  Perhaps only real "color scientists" know for sure.

On a practical level, and before I dove into this topic, what I had experienced was that my old Canon images were always redder than my very neutral looking Sony output.  Carefully reviewing Fuji X or GFX images, I can't see where they are demonstrably different from, nor better than, any other products on the market.  I can't look at an image and "see" any Leica Magic coming out of Leica products.  Nor am I convinced that Hasselblad has a better understanding than anyone else in the industry about how to make colors.

Yes.  I'm a "color science" skeptic. Why?  Well, it's a very long explanation.  So sit down, strap in, and hold on tight.  Here I go.  Yet again.  Bashing some drum or other.  This time it's "color science."

In-Camera Pipeline ~

Here is how I understand digital imaging to work, from start to finish.

  1. Light hits a light sensitive diode
  2. Diode spits out a tiny electrical signal
  3. Hardware applies gain to amplify the signal (ISO sets the amplification level) 
  4. Amplified signal enters an analog to digital converter (ADC)
  5. Digitized information exits the ADC (may receive additional signal processing as in the cases of extended ISO settings)
  6. Now there is a fork in the process ~
    1. Digital information passes through an ASIC that processes the digital data and writes it into an in-camera generated JPG file
    2. Digital information is written into a RAW file

We now have two file possibilities, RAW and/or JPG. 

A JPG file is the result of what the manufacturer implements, purely and simply.  It has passed through an in-camera de-mosaicing algorithm of the manufacturer's specification as well as other imaging algorithms (where various "looks" can be applied).  

JPG image processing decisions have been made by the manufacturer, so if there's a special "color science", it can be applied here.

Off-Camera JPG Pipeline ~

It is potentially important to note that further processing of a JPG starts with the limited 8-bit color range found in the file.  Heavy processing can distort colors, destroying any special "color science" that had been "baked" into the JPG file.
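
The 8-bit limitation is easy to show with a quick numpy experiment: quantize a smooth, dark gradient to 8-bit and 16-bit levels, push the "exposure" up, and count how many distinct tones survive.  (This ignores gamma encoding and JPG compression, so it understates the real-world picture, but the posterization effect is the same.)

```python
import numpy as np

# A smooth, dark, low-contrast gradient (think deep shadow detail)
gradient = np.linspace(0.0, 0.1, 1000)

as_8bit = np.round(gradient * 255) / 255        # what a JPG can store
as_16bit = np.round(gradient * 65535) / 65535   # what a RAW pipeline can keep

# Lift the shadows three stops, as heavy editing might
tones_8 = len(np.unique(as_8bit * 8))
tones_16 = len(np.unique(as_16bit * 8))
```

The 8-bit version has only a couple dozen distinct tones left to spread across the lifted range, which is where banding comes from.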

Off-Camera RAW Pipeline ~

RAW files require additional processing.  If your software has the possibility of displaying true RAW file information, you'll see a strongly green cast mushy image sort of thing that has red and blue dots evenly sprinkled around the image field.  In short, it's an ugly mess and it's easy to see why we need software to sort things out for us.

Here is the Bayer RAW format pipeline.

  1. RAW file is loaded into software
  2. De-Mosaic is most often the very first function performed on a RAW file.  Individual "pixels" of RGGB (red, 2x green, blue) information are processed using surrounding "pixel" data to calculate color and luminosity, such that there is now a different kind of information (de-mosaic calculated) for each "pixel", changing each and every original red, green, and blue location into full color with brightness
  3. De-mosaic'd images still look ghastly as the colors are still "off", so...
  4. Software "massages" a de-mosaic'd image and displays it to the user
  5. User can now begin performing additional modifications using the tools the software provides

Where's the "color science" in RAW image processing?

There are a couple candidates for where a manufacturer can add their own "special sauce."

The first place to ask "what has a manufacturer modified?" is the de-mosaic stage.  From what I've read, this step would be an incredibly difficult place to add a dash of "special sauce".  The algorithms are complex and varied.  Have a look at the de-mosaicing possibilities in the Open Source software RawTherapee and you'll see what I mean.

It turns out there are very good reasons to use the various de-mosaicing approaches, but if you are a RentWare user, I suspect you have zero control over the de-mosaic stage.  Please tell me if I'm wrong.

In any event, I feel that "color science" is not applied at the de-mosaicing stage.

The next stage is where things can get a little more interesting.  This is where camera (better said, sensor model) specific color management comes into play.

Some RentWare and Open Source software use .dcp format files to store camera-specific color corrections.  Hasselblad embeds camera-specific color corrections in proprietary XML-formatted files.  I'm not sure what the Sony, Canon, Nikon, Fuji, Olympus, or Panasonic proprietary software use, but rest assured, both de-mosaic and color management functions are "baked" into them.

In nearly all cases, camera-specific color corrections are hidden from the user.  However, there are instructions on how to generate .dcp color management files for RawTherapee.  Reading these instructions is insightful and relates directly to the question at hand.
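The mechanics of this stage are simpler than the mystique suggests: at its core, a .dcp-style correction is a 3x3 matrix multiplied against each pixel's linear RGB. Here's a minimal sketch; the matrix values are invented for illustration (real ColorMatrix/ForwardMatrix entries are measured per camera model):

```python
import numpy as np

# Hypothetical camera-to-working-space matrix of the kind a .dcp profile
# stores. Each row sums to 1.0 so neutral grays stay neutral.
camera_matrix = np.array([
    [ 1.80, -0.60, -0.20],
    [-0.25,  1.55, -0.30],
    [ 0.05, -0.45,  1.40],
])

def apply_color_matrix(rgb, m):
    """Apply a 3x3 color correction to linear RGB pixels (shape ...x3)."""
    return np.einsum('ij,...j->...i', m, rgb)

# A neutral mid-gray pixel passes through unchanged.
gray = np.array([0.5, 0.5, 0.5])
print(apply_color_matrix(gray, camera_matrix))  # [0.5 0.5 0.5]
```

The off-diagonal terms are where a manufacturer's measured (or chosen) color rendering lives, which is why two cameras profiled against the same chart converge on the same colors.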

Using industry-standard color charts, camera-model-to-camera-model variations can be corrected.  If you use RawTherapee, for example, open the Color Management tool and compare the colors under "Camera standard", under the automated camera-specific profile, and with no corrections at all.  The differences are illustrative.  Then look at properly color-managed files from different cameras and tell me where you see any differences.  Using non-manufacturer software tends to eliminate the effects of any "special sauce." 

If a camera manufacturer is to inject some special "color science" during RAW image processing, the color management stage would be an excellent place to do so.

What Else? ~

It is _after_ the de-mosaicing and the subsequent color management stages that color look-up tables can be applied.

Color grading (not to be confused with color management) can be a one-button exercise in many programs.  Often these "looks" are implemented as LUTs.  Video software, as well as still-photography RentWare, comes with many LUTs pre-installed.

With proprietary and RentWare software you get the de-mosaiced, color-managed file displayed after RAW file import.  Then there is this thing they provide that adds a special "look."  For Sony it's things like "Neutral", "Standard", "Vivid", "BW", etc.  That's the color look-up table (LUT) stage.

What's potentially interesting here is that as a user you can generate your own LUTs.  Of course you can buy them, too.  What's nice is that the user controls the output.
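A LUT is just a lattice of pre-computed output colors indexed by input color. Here's a minimal sketch using nearest-neighbor lookup into a tiny, hand-built identity cube; real software uses much larger cubes (17 or 33 points per axis) and interpolates trilinearly between the 8 surrounding lattice points, but the indexing step is the same:

```python
import numpy as np

def apply_lut_nearest(rgb, lut):
    """Look up each pixel (values in 0..1) in an N x N x N x 3 LUT,
    nearest-neighbor. Production code would interpolate instead."""
    n = lut.shape[0]
    idx = np.clip(np.rint(rgb * (n - 1)).astype(int), 0, n - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# A tiny 2x2x2 identity LUT: each lattice point maps to its own coordinate.
n = 2
grid = np.linspace(0.0, 1.0, n)
lut = np.stack(np.meshgrid(grid, grid, grid, indexing='ij'), axis=-1)

pixel = np.array([0.9, 0.1, 0.8])
print(apply_lut_nearest(pixel, lut))  # [1. 0. 1.] -- snapped to the nearest lattice point
```

Replace the identity lattice with any measured or hand-tuned colors and you have a "look": that's all a purchased or generated LUT file contains.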

Last Question ~

Might it be possible to somehow "borrow" from one software to another and preserve a manufacturer's recipe?

Maybe.  It depends on several things.  If we can import an industry-standard color chart and have a manufacturer's color management applied to it, then yes, there's a possibility we can duplicate the "special sauce" and move it to another software package.
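The chart-based approach boils down to a least-squares fit: read the same patches as rendered by two programs, then solve for the matrix that maps one set of colors onto the other. A minimal sketch with entirely made-up patch readings and a made-up "recipe" to recover:

```python
import numpy as np

# Hypothetical chart readings: the same patches rendered by two different
# programs (rows are patches; columns are linear R, G, B).
software_a = np.array([[0.80, 0.20, 0.15],
                       [0.25, 0.60, 0.20],
                       [0.10, 0.25, 0.70],
                       [0.50, 0.50, 0.50]])

# Pretend software B renders everything 10% warmer in red, 5% cooler in blue.
transform_true = np.diag([1.10, 1.00, 0.95])
software_b = software_a @ transform_true.T

# Fit the 3x3 matrix that best maps A's colors onto B's (least squares).
m, *_ = np.linalg.lstsq(software_a, software_b, rcond=None)
m = m.T
print(np.round(m, 3))  # recovers roughly diag(1.10, 1.00, 0.95)
```

With a real 24-patch chart the fit is over-determined and won't be exact, especially if the hidden pipeline applies non-linear steps (tone curves, hue twists) that a single matrix can't capture.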

Yet I have my suspicions.  Manufacturers have built closed-system software where the early image-processing steps are hidden from the user.  We don't know exactly what parameters are being used, and we're in the dark about many potentially important details.  So, in the end, I'm not sure how successful we can be, but we might get surprisingly close.

Another approach could be to open, in a manufacturer's proprietary software, an image whose processing we like, then open another program next to it and change processing settings until the images look the same, then save the actions or generate a LUT.  This is much more hit-and-miss and would not be very accurate.

By hiding what they are doing behind the wall of proprietary software, manufacturers can make any marketing claim they want.  To make matters worse, RentWare does exactly the same thing.  There is no way to verify any of these claims.  Further, such claims can then be picked up by communities of photographers, where they take on a life of their own.

On the other hand, what I'm finding these days is that by more deeply understanding each and every step of a RAW processing pipeline, I am free, supported by increasing knowledge, to create images that please me.  Matching one company's "color science" on something I'm working on has become a fun and interesting challenge, and I'm very happy when I get very, very close, if not precisely spot-on, to something that's been "secret sauced" behind a proprietary/marketing "color science" wall.

Wednesday, January 10, 2024

Random Thoughts on Photography [7] ~ SuperResolution, a better solution

I spent a bit of time looking at single image and multiple image stacking UpScaling.  As a friend suggested several years ago, there's a simpler way.

If one has the time, image stitching for true SuperResolution is a very fine solution.  Having a tripod helps.

Consider the following.  I used a Sony A6300, a Sigma 19mm f/2.8 EX DN E, a tripod, and a small "L" bracket that I made before moving to a place where access to materials and tools is difficult to non-existent.

There's a trick to using the "L" bracket, and it's simply this: I have to put the pivot point directly under the lens's nodal point.  In practical terms, make sure the tripod head mount screw sits under/aligns with the lens aperture location.  Done this way, swiveling the tripod head to capture overlapping sections of the scene makes image stitching a breeze: all image elements align quickly.

The dimension on the long side is well over 9,000 pixels.  100% crops are along the top.  The full scene is under the crops.  There's no lack of definition/resolution.  

My friend was, of course, correct.  There is a simple/inexpensive way of achieving SuperResolution and this is it.
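Planning a stitch takes only a little arithmetic: each extra frame adds its width minus the overlap. A rough sketch, assuming the A6300's 6000-pixel long side and a 30% overlap (a common safe margin for feature matching; pick your own numbers):

```python
import math

frame_width = 6000      # pixels per frame on the long side (A6300: 6000x4000)
overlap = 0.30          # fraction of each frame shared with the previous one
target_width = 9000     # desired stitched width

step = frame_width * (1 - overlap)               # new pixels per extra frame
frames = 1 + math.ceil((target_width - frame_width) / step)
print(frames)  # 2 frames: 6000 + 4200 = 10200 px wide, comfortably over 9000
```

In practice the stitcher's projection and final crop eat a little of that, so shooting one frame more than the arithmetic demands is cheap insurance.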

 

Image Stitching for Increased File Sizes

 

Tuesday, January 09, 2024

Random Thoughts on Photography [6] ~ Sigma 24mm f/3.5 DG DN vs Nikon Nikkor 24mm f/2.8 Ai

I've had a nice Nikon Nikkor 24mm f/2.8 Ai up for sale for quite some time now.  The reason is that last year I decided to put together a full frame kit based on AF lenses, so I sold nearly all of my manual focus Nikkor glass.  I'm getting old, and manual-focus optics, no matter how brilliant they are, present focusing challenges that I'd rather not spend my time dealing with.  So, home came a Sigma 24mm f/3.5 DG DN to replace the Nikon 24mm.

Not having compared the two lenses, I thought I ought to, just in case I discovered a "need" to hang onto the Nikkor.  So, using a couple of lens test charts that I printed, I set out to see whatever there was to see.

Mind you, I'm not measuring anything.  Other people do a better job of such things than I.  However, by comparing one lens against another using the exact same setup I should be able to illustrate any differences.

What I find is that the 

  • Nikon, without software intervention, is a little soft in the corners compared to the Sigma, but...
  • Nikon appears to have just a hint of something sharper/contrastier in the center of the frame.  Maybe.
  • Sigma has pincushion distortion that is correctable in software
    • There seems to be little to no impact on "resolution" when correcting the Sigma's distortion in software
  • RawTherapee's Capture Sharpening tool pretty much levels the field

Recently I had a wild hair to try IR imaging, and the Nikkor would be perfect for that.  It has an IR focus mark, whereas with any AF lens I have, it would be a guessing game where critical focus lies in the IR range.  So, if I can't/don't sell or trade away the Nikkor, perhaps I'll eventually be able to put it to good use.

Until something happens the old Nikkor rests peacefully in the toy closet.

 

Sigma 24mm f/3.5 DG DN, Nikon Nikkor 24mm f/2.8 Ai ~ Comparison Capture Sharpen