Thursday, March 30, 2023

AI poses an enormous problem ~ it's much more than image scanning and Copyright issues

[updated 30 March, 2023 - see the end of the post]

The question of Copyright and AI scanners came thundering home recently when I learned that my Flickr images have been included in at least two major AI datasets. So, I submitted the following Help Request to Flickr.com - 

Dear Sirs, 

Using tools available to see if my images have been consumed by "AI" scanners I see that, yes, indeed, my Flickr images have been scanned. 

I am very aware of Copyright and its limitations. I used to advise legal counsel at the companies I worked for in technology. I did NOT give anyone the right to scan and then use my works. 

I hold copyrights to all but one of my images, and I believe there may be a serious issue with allowing "AI" scanners to do what they are doing. 

What is Flickr's position on this? 

La grève des éboueurs ~ Paris 21 March 2023

 

Indeed, among many other tasks, I helped advise the lawyers at the last company I worked for.  I'm fairly familiar with what is and is not allowed, and where the gray areas are in copyright law and its application.  Perhaps Flickr has developed a position on the topic?

Their reply, from what I can tell, is likely a lawyer reviewed/sanitized/canned message that gets sent to everyone who asks about AI -

"Hi there,

Thank you for your input on this matter.

As this is a new and emerging space, we have not yet fully reviewed how Flickr will fit with AI images and photography. 

At this time, there have not been any changes to the Copyright Act to address AI generated images.

It is something we are reviewing closely and should there be any changes that would affect the whole platform, we will certainly notify our members.

As always, if there are instances where a creator identifies their copyright or license has taken place, they can submit a claim through our DMCA process for review and actions here.

Best,
Doug

We're updating the terms of Flickr's free accounts to strengthen our community and the long-term stability of Flickr. Read more here.
"

La grève des éboueurs ~ Paris 21 March 2023

 

First, I fail to see what is unique about AI that could require legal redefinition/reapplication.  Secondly, Flickr recently announced a new area of AI generated images where people can post the output of AI software.

It feels as if Flickr is giving this a rather big "pass."

Class Action lawsuits have been filed and I wondered how to join. One site I visited indicated that _normally_ 

"...you don't need to do anything to "join" a class action. If your legal rights are affected by a class action, you usually will only need to get involved once the case settles. In most cases, you will need to submit a claim, either online or through the mail, to receive your portion of the settlement or judgment...

Digging into Flickr's past I found that four years ago it was known that Flickr had been scraped by an AI in a project that involved IBM. 

"...“This is the dirty little secret of AI training sets. Researchers often just grab whatever images are available in the wild,” said NYU School of Law professor Jason Schultz..." 

One more thing. There was an article published back in 2018 on scraping Flickr for "deep learning experiments", complete with code to implement a scraper, where it was noted that - 

"...1. It is not legal and ethical to scrape some websites 

2. Even if you were not concerned about the law or ethics, scraping at scale can be challenging particularly if there are safeguards in place to discourage scraping..."

La grève des éboueurs ~ Paris 15 March 2023

 

Which raises the question: Did Flickr allow scrapers/scanners, or did they just not care and let it happen?  After all, they'd published an API that people could use as they pleased.

Any way I look at it, I'm wondering if the benefit stills photographers in the "Pro" program _pay_ for outweighs the huge issue AI poses, and has posed for several years, for creative copyrighted works.  Flickr may on some level be culpable.

I follow a number of tech websites to try and keep current with the State of Things in science and technology.  Something very concerning turned up just the other day (March 24, 2023 to be exact).

After listening to the podcast I came to the understanding and realization that the problems posed by AI far outstrip the scanning of our images.  

AI is out of control and, if not contained, might pose serious risks to us. As this fire is building to disturbingly large proportions, Microsoft has laid off key personnel from their AI Ethics team.  Isn't _now_ the time to staff up ethics teams on AI?  What the h*ll is going on?

Ack!  Something very important is happening right this moment and I've not been following the ethical and moral implications of AI as closely as I could have.

If you're at all interested in this topic, have a listen. 

[Update 30 March, 2023] - Some people are calling for an outright ban on AI.

La grève des éboueurs ~ Paris 21 March 2023
 

Yes, the images used to illustrate this blog entry
were carefully chosen to set the mood


Sunday, March 12, 2023

One last Wabbit Hole ~ and it's a whopping BIG one, too!

Accidental Renaissance ~ Retromobile 2023
A little "accidental Renaissance"
to keep things interesting in prep for
reading through somewhat dense material.

 

[References section updated twice ~ 4 March, and 3 March, 2023]

After whinging and whining about the lack of verifiable, accurate knowledge, I find myself at the bottom of an Enormously Vasty Wabbit Hole.  At the top and just as I fell in I saw it was labeled  "Knowledge You Are Looking For."

The areas I wanted to learn more about but was having a Devil of A Time finding anything useful/correct/truthful included dynamic range, sensor noise, and color depth. I was still poking at the things that might go into defining what a "Fat Pixel" camera might be made up of.

Apologies first: For this blog entry I'm going to move very fast and rather deep.

There are nuances and details that can and should be applied to each of the following statements.  Each step can be an entire study unto itself.  As always, don't trust me, but if you must, make sure you verify.

Here is my present state of understanding.  

Sensor Basics ~

~ There is an analog input side of every sensor, ending at an analog-to-digital converter (ADC), with the following components -

  • Light sensitive photo site
  • Analog amplifier
  • ADC

~ There is a digital output side of every sensor starting at the ADC with the following components -

  • Digital image processing chips for stills and video
  • Digital image processing software for stills and video

~ ISO controls the gain on the _analog_ amplifier that feeds the ADC

~ RAW files are made up of the digitized data spit out of the ADC

~ jpg files are the result of the ADC output _plus_ whatever massaging the manufacturer applies on the digital circuitry in-camera

~ In-camera jpg processing _may_ further increase gain digitally (as a second gain function)


Dynamic Range ~

This is the EV difference between highlight areas with detail and shadow areas with detail.  Sensor noise reduces dynamic range.  Said another way, the quieter the sensor the greater the dynamic range.

In general...

~ The broadest dynamic range is seen at the Base ISO

~ The higher the gain on the analog amplifier (increasing ISO) the lower the dynamic range
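
As a back-of-envelope check of "the quieter the sensor, the greater the dynamic range", with made-up electron counts -

```python
import math

# Dynamic range in EV (stops) = log2(largest recordable signal / noise floor).
def dynamic_range_ev(full_well_electrons, read_noise_electrons):
    return math.log2(full_well_electrons / read_noise_electrons)

print(round(dynamic_range_ev(50_000, 3), 1))    # quiet sensor: ~14 EV
print(round(dynamic_range_ev(50_000, 12), 1))   # noisier sensor: ~12 EV

# Each stop of analog gain halves the usable well (in photons), so an
# ISO-variant sensor loses roughly 1 EV of DR per ISO doubling:
for stop in range(4):
    print(round(dynamic_range_ev(50_000 / 2 ** stop, 3), 1))
```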


Sources of noise ~

~ The analog amplifier (where signal gain is first applied) that feeds the ADC and ADC itself are the primary sources of noise in RAW files (see Canon DSLRs)

~ In-camera jpg processing digital circuit may further increase gain and thereby add noise downstream from the analog circuit.  In general, if there is noise on the output of the ADC, unless there is noise reduction on the digital side, we will see noise out of the digital circuits, too (see Canon DSLRs).

Sensors ~ ISO Variant

These are the traditional CMOS sensors that we've all come to know and love.  Until very recently, these were the only ones commercially available to us.

~ The electron gathering well at each sensor site has these properties -

  • Traditional big and somewhat shallow electron gathering well design
  • When used at low ISO settings (i.e.: low analog amplifier gain) we get -
    • Broad dynamic range
    • Slight pixel to pixel variations (subtle noise, if you will)
    • Lowest noise/broadest dynamic range tells us what the Base ISO is
  • As ISO increases -
    • Dynamic range decreases
    • Noise increases... maybe... (see astro-photography reference video below about cases where this is _not_ entirely true)

Recent Sony Design Enhancements ~ ISO invariance

Sensors starting with A6300, A7RIII, A7III, and very obviously the A7SIII, as well as the Sony manufactured MF sensors used in Fuji GFX cameras have _two_ electron gathering wells per photo-sensor site.

~ First electron gathering well at each sensor site has these properties -

  • Traditional big and somewhat shallow well design
  • Used at low ISO settings
  • Broad dynamic range
  • Slight image to image variations (subtle noise, if you will)
  • Lowest noise again tells us what the Base ISO is

~ Second electron gathering well at each sensor site has the following properties -

  • Narrow but deep well design
  • Used for higher ISO settings
  • Shows reduced dynamic range
  • BUT it shows _less_ noise than the big, shallow electron well
  • The ISO at which this well type reaches its highest dynamic range sits above Base ISO 1, and it sets a Second Base ISO
     

Interesting property: ISO invariant sensors tend to show no change in noise as ISO is increased after the Second Base ISO has been switched on.

Note: There are examples of sensors that are ISO invariant from low ISO, such as the Nikon D750 where noise levels do not change from ISO200 on up. See the astro-photography reference video below for the Nikon example.
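
A minimal way to see what ISO invariance buys: compare raising ISO in camera against an equivalent digital push in post, in a toy model where noise is injected both before and after the analog amplifier.  The noise figures below are invented for illustration only -

```python
# Toy comparison: "raise ISO in camera" vs "push in post".
# upstream noise = photosite/amp noise (amplified with the signal);
# downstream noise = ADC/readout noise (added after the gain stage).

def snr_raise_iso(signal_e, gain, upstream_noise_e, downstream_noise_e):
    # Analog gain lifts signal and upstream noise above the downstream noise.
    return (signal_e * gain) / (upstream_noise_e * gain + downstream_noise_e)

def snr_push_in_post(signal_e, gain, upstream_noise_e, downstream_noise_e):
    # gain is unused on purpose: a digital push multiplies signal and
    # all noise alike, so the ratio matches the un-pushed file.
    return signal_e / (upstream_noise_e + downstream_noise_e)

# ISO-variant sensor (big downstream noise): raising ISO clearly wins.
print(snr_raise_iso(100, 8, 2, 20), snr_push_in_post(100, 8, 2, 20))
# ISO-invariant sensor (tiny downstream noise): the two are nearly equal,
# which is why pushing in post costs almost nothing on such sensors.
print(snr_raise_iso(100, 8, 2, 0.1), snr_push_in_post(100, 8, 2, 0.1))
```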


Comments ~

Taken as a whole, I feel I can begin to understand a few seemingly unrelated things.  Such as -

  • Early Kodak CCD MF sensors showed the best image quality at the time.  This, even if they _expanded_/_amplified_ the ADC output from 14bits to 16bits.  At least they started out at 14bits when everyone else was down around 10bits of RAW and/or stuck at 8bits jpg.  I can see where the "Fat Pixel" ideas could stem from in terms of perceived image quality.  What I've learned is that modern sensors can easily outperform the original Kodak MF sensors in every measurable way, _and_ we get ISO flexibility with current sensors where Kodak worked best at Base ISO, period.
  • I now see (on Photons to Photos - see reference link below) the differences between the old 14bit ADC Canon 5D MkII and 7D sensors and the _12bit_ ADC Sony A6000.  In one release of products we went from "ya, that's not bad but if we're not careful there's loads of noise in the shadows" to "wow! now there's a clean image".  The A6000 has more dynamic range at 12bits ADC and, as hoped for, less noise than the older 14bit ADC Canon sensors.  There is likely something about Canon's analog circuit that was rather prone to the introduction of error.  It's interesting to me that it's only with the release of Canon's new mirrorless cameras that their sensors seem to meet or slightly exceed the early Sony FF sensors.  This tells us something about how much work Canon has done on the analog side photo-site to ADC path.
  • Looking (again on Photons to Photos - see reference link below) at first generation Sony A7, A7S, and A7R, there's not really all that much difference between these models in terms of dynamic range and noise.  Yes, the A7S has slightly better dynamic range than the A7, but I'm not sure I'll ever actually see or appreciate the extra 1/3stop EV the A7S has.  I've read where we can begin to see differences of 1EV dynamic range, but I haven't verified that for myself.  Coming back to the Photons to Photos information, the A7R looks nearly as good as the A7S.  In the end, if I can't make a great "Fat Pixel" image with any of these three cameras, I'm doing something drastically and dramatically wrong.
  • ETTR (Expose To The Right) "works", but not for the reasons we're commonly told.  It "works" because we are able to bring the dark areas up out of the base level noise.  Of course we have to guard against saturating the highlights beyond recovery, but once the data has been collected, it's more flexible in processing than images where the dark areas are down in the noise base of the sensor.
  • ETTL (Expose To The Left) "works" as expected.  We use ETTL in black and white photography as a "lazy man's way" of guaranteeing as much detail in the highlights as possible.  This can closely emulate film images when processed appropriately.  The highlights are raised in processing and the shadows rise with the highlights.  As an aside, I've started to use Zebras to know when the highlights are saturated.  Coming back to ETTL, with quiet sensors we might be able to avoid adding distracting noise to the shadow areas.  Which leads me directly to the next item.
  • One of the problems I had was understanding why there is so much noise in the shadow areas of severely under-exposed images, even at 100ISO where I thought we should see the lowest noise everywhere across a broad dynamic range image.  It turns out the noise is easily explained.  The electron gathering wells aren't able to gather enough information consistently across the wells to make the dark areas appear as smooth as the light areas.  The light simply is not available at those low levels to distribute evenly.  Hence noise, even at 100ISO.  To avoid this problem in a single shot, use ETTR or better yet Zebras.  If the dynamic range of the scene is broader than a sensor can handle, there's HDR and image stacking during processing.
  • Potential recent "Fat Pixel" candidates could be the Sony sensor manufactured Fuji MF GFX cameras.  Their dynamic range exceeds by 1EV the best Sony FF.  The MF Fujis are well above my current Pay Grade and I doubt we'll ever see them go for less than 1500USD used.  Yes, I know.  The 50R is trading hands on the used market for around 2250USD.  It's still too rich for me.  But there is something to be said about well-engineered sensor development and ISO invariant circuitry.  Kodak sensors were never ever close to being this good.
  • I now understand why backside illumination of a sensor "works."  By moving the wiring layer behind the photodiodes, more of each photo-site's surface is exposed to light, so more signal is gathered relative to the noise of the analog circuits.  It's a neat trick, actually.  I feel there are some creative solutions being applied to sensor design these days.  This is Fun Stuff.
  • Pursuing the absolute best color depth, longest dynamic range, and lowest noise images possible requires a static subject, a tripod, setting the camera at its Base ISO 1, and shooting three or four identical images.  Three image stacked low ISO photos "work" because the process averages out the subtle fat/shallow electron well variations.  The final output image quality should easily exceed that of, well, just about anything.  And speaking of which...
  • Sony has come up  with something interesting in their dual Base ISO sensors.  While destined for video work, I can see where there will be benefits for us stills shooters, too.  Even if we're stuck at 8EV dynamic range at Base ISO 2, I'm still struck by the possibilities of lower than Base ISO 1 noise.  
It's a Crazy Topsy Turvy Counter-Intuitive World out there and I can't wait to see what the Sensor Development Wizards come up with next.
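
The image-stacking idea in the list above has simple math behind it: the signal adds coherently across frames while random noise only adds as the square root, so signal-to-noise improves by sqrt(N).  A quick sketch, with invented noise figures -

```python
import math, random

# Stacking N identical low-ISO frames improves SNR by sqrt(N): the signal
# is the same in every frame, but the random noise partially cancels.

def stacked_snr(single_frame_snr, n_frames):
    return single_frame_snr * math.sqrt(n_frames)

print(stacked_snr(20, 3))   # a three-image stack, as in the text
print(stacked_snr(20, 4))   # four frames doubles the SNR

# Empirical check: average noisy measurements of a constant "pixel" value.
random.seed(42)
signal = 100.0
frames = [[signal + random.gauss(0, 5) for _ in range(1000)] for _ in range(4)]
stack = [sum(col) / 4 for col in zip(*frames)]
single_sd = (sum((x - signal) ** 2 for x in frames[0]) / 1000) ** 0.5
stack_sd = (sum((x - signal) ** 2 for x in stack) / 1000) ** 0.5
print(round(single_sd, 1), round(stack_sd, 1))   # stacked noise is clearly lower
```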

Until further notice I have stopped reading the thoughts, evaluations, and conclusions of the vast majority of popular still-photography related websites.  In general they distinctly lack meaningful data on sensor performance, dynamic range, color depth, and noise characteristics.  I know they're "trying" the best they can, but I've found that I have a difficult (Gear Grinding) time with the Marginal at Best "test" methods and bad data they attempt to analyze and justify their conclusions with.  It's become just "noise" to me.
 
Sometimes a person has to dig deeper to get at truth.
 

References ~ [updated/modified on 4 March (correcting a link to the most important video, the first), and updated slightly on 3 March, 2023]

Here is a video with very clear, concise explanations of Sony sensor behaviors.  

I had to pause the video every other sentence, or so it seemed, so I could stop and think about and think through what was just said.  There is so much _good_ information here.  In fact, _this_ should be the reference for discussion of sensor performance.  I wish more of YouTube was this accurate and informative.  I've found there is too much "squishy thinking" go'n on out there!!!  So this is an absolute breath of fresh air.  

It's easy to see how much this video influenced the organization of my thoughts.  While I knew various pieces and parts of the process, the video helped me organize what I knew into a cohesive whole.  In fact, I've borrowed from the video outline in the writing of this piece because I found it that useful.

Photons to Photos has a great site on measured dynamic range, noise, and sensor performance. Complete with methods and rationale explaining what they do and why. For some reason I've not run into this site until now.  I spent many hours looking around, reading, and comparing various sensor data.  It's amazing how much a person can learn if they're patient and aware and take a few notes along the way.

Here is a video on ISO invariance and why it's interesting and useful.  There are also comparison images showing how ISO variant and invariant sensors behave differently. 

I often wondered what changed with the introduction of Sony's cameras around 2012.  Reading Jon Rista's explanation from 19 January 2015 - 01:30 PM gives a clue.  While I might not agree that 500nm wafer technology was a problem (except as an example of Canon not keeping up with the latest wafer fabrication trends), the rest of his argument rings true.  Further, Jon Rista gives us a few potential clues about "Fat Pixel" photo-sites in a post from 29 January 2015 - 12:09 AM (pay attention to his comments on photo-site area and electron well capacities).  For a Geek who really wants to Geek Out it's very interesting and potentially practical stuff.

Lastly, here is a link to a Wiki page where we can see a Sony camera and function matrix.  In addition, Sony has a matrix of ADC bit depth information.  This too is organized by camera and function.  Using this information we can begin to guess how much circuitry is implemented in the various cameras that influence things like read speed and camera capabilities.

To me the most important outcome of all the Geeky Nerdiness is the acquisition of knowledge that can be applied and balanced in the real world when pursuing highest possible image quality.

Update of a common 'net expression from when dinosaurs roamed the earth: Base ISO 1, f/8 and be there.  

If it's dark out, Base ISO 2, wide open and be there.

Tuesday, February 28, 2023

Reconsidering the Sony A7S... [part Three]

My Gears have been Ground.  I have vented my spleen.  And I feel I finally have my arms around the nature of the problem.

The problem is this: Many of the camera review sites post numbers grading various aspects of sensor performance that are for various reasons problematic.

I don't yet have a solution to the problem.  All I know is that converting images to TIFF, using in-camera JPG processing, or image downsizing to "normalize" (whatever that means in the context of dynamic range and color depth) all have their problems from an image quality measurement and comparison point of view.

Further, I don't see a way to evaluate if Sony's claim of 15EV+ dynamic range for their A7S is "real" or not. I'm not sure if this small 12mpixel Full Frame sensored device is part of the "Fat Pixel" family of Mythic Pixie Dust cameras.  If it is, how might we _see_ or _measure_ the Magic?

Not knowing entirely how to proceed I will set all this aside and go have a long think.

In the meantime, the friend who shared how some standalone Noise Reduction software can work its magic on noisy images suggested something to me.  If I had a problem with downsizing A7R images to A7S image size (where the downsized image is _always_ better with large sensored cameras in DxOMark and DPReviews reviews), why not upsize the smaller image to the A7R size?

Hence this blog entry.

Over on YouTube there is a video by a woman who printed a couple A7S images to 47 inches long.  That's rather big, right?  Is it any "good?"  Hmmm...

There's a guy who ran a comparison study with two photographer friends where they tried to see any differences between A7S and A7 (24mpixel) prints.  It seems that differences between the A7 and A7S are best seen when comparing identical images.  When looking at standalone prints it seems much more difficult to tell which print was made by which camera.

Then there is the ability to upsize images in a somewhat meaningful way.  I've been looking at this a little and have to say, the results can be impressive.

There's lots of food for thought, here.

Setup~

  • Sony A7S image opened in the Gimp
    • Upsize from 4240pixels to 7300pixels
    • Apply 1pixel USM in upper layer with 70% opacity
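
A rough stand-in for that GIMP recipe, sketched on a single row of grayscale pixels.  Nearest-neighbour upsizing and a hand-rolled 1-pixel unsharp mask are crude substitutes for NoHalo and GIMP's USM; the weights are assumptions -

```python
# Sketch of the recipe: upsize, sharpen, blend the sharpened layer at 70%.

def upsize(row, factor):
    """Nearest-neighbour upsize (GIMP's NoHalo is far more sophisticated)."""
    return [px for px in row for _ in range(factor)]

def unsharp(row, amount=1.0):
    """1-pixel-radius unsharp mask: original + amount * (original - blur)."""
    blur = [sum(row[max(0, i - 1):i + 2]) / len(row[max(0, i - 1):i + 2])
            for i in range(len(row))]
    return [px + amount * (px - b) for px, b in zip(row, blur)]

def blend(base, top, opacity=0.7):
    """Mimic a 70%-opacity sharpened layer sitting over the base layer."""
    return [b * (1 - opacity) + t * opacity for b, t in zip(base, top)]

row = [10, 10, 200, 200]              # a hard edge
big = upsize(row, 2)                  # 4 pixels -> 8 pixels
result = blend(big, unsharp(big))
# The edge overshoots on both sides -- a real pipeline clips to valid values.
print([round(v, 1) for v in result])
```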

Comparison ~

 

DPReview image

DPReviews base image 

DPReview image ~ reworked 

Processed image

Sony A7S vs A7R IQ Comparison 3


Comments ~

Let's start by looking at the 2nd and 5th rows of images.  The 2nd row shows the native file size Sony A7R at 100 percent.  Neither noise reduction nor Capture Sharpen was used.  When I started this WeeLookSee I thought I might see a clear difference in resolution since the A7R has no AA filter and the A7S does.  So if this is as sharp as the A7R is without any further processing, have a close look at row 5.

Row 5 is the A7S image Rawtherapee Capture Sharpened and noise reduced, _then_ upsized to 7300pixels long using the Gimp NoHalo algorithm.  Pretty amazing, isn't it?  Stare at it awhile.  Still amazing, right?  The most obvious difference to me is in trying to read the "One Way" sign in the middle of the center column of images.  Amazing.

Now let's have a look at rows 4 and 6.  Row 4 is the native file size A7R image with Rawtherapee Capture Sharpen and noise reduction.  We can just begin to read the "One Way" sign.  Noise is reduced.  The overall image is looking not half bad.  Row 6 is the upsized A7S image with a 1 pixel UnSharp Mask applied.  While it looks pretty good, it should be obvious that there is more detail in the A7R image.

Lastly, looking at just the A7S upsized images in rows 5 and 6 and not trying to compare them against any of the A7R images, what do we see?  They actually look pretty good, don't they?

While I knew it already, this exercise re-enforces to me that software can play an important role in image processing.  When done with care, upsizing the 12mpixel A7S to A7R dimensions can yield interesting results.  

As I wrap up this blog entry I have to confess that I've had that long hard think.  I've thunk a bit.  I've cogitated some.  I've studied a lot.  I've learned a bunch.  I've _finally_ found a Vein of Knowledge that is proving rather useful.  

Yes, Martha, there is One More Wabbit Hole to fall down.

Sunday, February 26, 2023

Reconsidering the Sony A7S... [part Two]

I guess my Gears are easily Ground.

It started many years ago when I rode motorcycles.  My Gears were Ground by various reviewers who either said things that were silly, all too often wrong, or failed to mention things that would be important to a rider.

The first example is of a review written about a Kawasaki touring bike.  The reviewer noted all the usual things, except one.  The tourer had a clearly _over_ spec'd alternator.  Why would that be? Well, Kawasaki knew that buyers of their tourer would want to add accessory lights and things that would use the extra juice.  But reading the review, no one ever knew what the standard issue bike was capable of.

The second example involves the early Ducati 900SS.  They are narrow, light, and made sufficient horsepower to throw you down the road at 135mph in stock trim.  More importantly was the fact the bike was rock solid stable at all speeds. It was like riding a laser beam.  I kid you not.  The stability instilled a certain confidence.

Compared with this, Japanese motorcycles from the 1980's tended to wander, have slightly vague handling, and might induce a "tank slapper" under the wrong conditions.  I know these things first hand because of the RD400, 550cc Vision, 650 Seca, GS500, and three road-worthy Ducatis (bevel and belt-drive) I used to own.  Fortune was really on my side at the time as I got to ride one each of every model bike ever made or imported to the US during the early to mid-1980's. I accidentally bounced the valves of a Kawasaki 750cc Turbo prototype.   14,000RPM was a bit beyond spec, but the bike survived with zero problems.  Yet it was the Ducati that instilled confidence.

This is what I look for in reading reviews. Confidence.  I want confidence that people know what they're talking about.  I want confidence that their findings are worth considering.

A photography example of what I mean comes from reading just about anything written by Geoffrey Crawley.  His reviews were in-depth, concise as possible, and informative.  If there is a detail that he felt was important to share, he would expand the subject until everything became clear, such as when he wrote about the 1/1000th of a second top speed of the original Nikon F.  I have confidence that he knows what he's written about.

Similarly, I enjoy reading Roger Cicala at Lens Rentals. He posts not just his findings, but _how_ he got to those findings in the first place.  He publishes his methods and _reasoning_ behind those methods.  It's a real joy to read, learn, and understand.  M. Cicala instills confidence.

What fails to instill confidence is when well-established reviewers make decisions, "test" something, assign numbers, and post the "results" without sharing at the same time clear methods and limitations.  Specifically, converting images to TIFF, using in-camera JPG processing, or image downsizing to "normalize" (whatever that means in this context) all have their problems from an image quality measurement and comparison point of view.  But I never knew about the limitations until I dug into the subject.  Information wasn't easy for me to find.

Why is any of this even remotely important to me?  I prefer accuracy and full truth so that I can make the best informed purchase and use decisions that I can.  I'd like to be able to consider the trade-offs as they really are.

In the case of the Sony A7S, I passed on two inexpensive, good condition examples thinking that they were of lesser stills image quality than the A7 or A7R.  I'm not sure what the real answer is, but I'm learning it's not exactly how I read about it on various "reviewer" sites.  Of course it's too early for me to know if it's worth plunking down good hard earned money for one. 

Building on the previous blog entry, I was interested in seeing what happened to Sony A7S and A7R DPReview supplied sample ARW (RAW) images when I applied Rawtherapee Capture Sharpen and Luminance Noise Reduction.  Capture Sharpen should be obvious since the A7S reportedly comes with an AA filter.  Noise Reduction would be applied to see if the heavily amplified dark areas of the scene could be quieted down.  The A7R dark areas in particular look to me pretty ghastly at the native sensor resolution image size compared to the A7S.  Perhaps noise reduction could help the A7R image?

Setup~

     NOTE: the DPReview images were 1.7 and 2EV underexposed and filed under the heading of "Dynamic Range in the real world."  They were trying to share something they "saw" regarding noise control and dynamic range.  So I needed to do what they did, raise the shadows to the point the overall image looked somewhat OK, then consider the noise and dynamic range, particularly in the dark amplified areas.  Where DPReview used Lightroom's Exposure Value slider, I used Rawtherapee's Lightness so as to avoid blowing out the highlights.

Comparison ~

DPReview image

DPReviews base image 

DPReview image ~ reworked

Processed image

Sony A7S vs A7R IQ Comparison 2


Comments ~

Considering how dark the shadows in the original un-modified images are (they were shot 1.7EV and 2EV under-exposed), the processed results are rather amazing.  I could never ever dig this deep into the shadows with my old Canon DSLR systems.  Sony has done a great job.

Looking again at the original native size images, the A7R shows more noise than the A7S in the shadows.  To me the difference is obvious.  I'll say it again: the A7S native size image shows _less_ noise in the heavily amplified shadow areas than the native size A7R image.

Then I did two things.  I Capture Sharpened each native size image and Noise Reduced them. 

The Capture Sharpen step makes the A7S image very crisp and sharp-looking.  The A7R gets that little bit of extra sharpness, too.  I've come to like using this function early in my image processing.  Images don't look overly sharp to my eyes.

As I've said elsewhere, what surprises me is that the A7R no-AA filter image isn't sharper looking than the AA filtered A7S.  I'm not sure how to evaluate this.  Though perhaps it should be noted that the "One Way" sign is nearly readable in the A7R Capture Sharpened/Noise Reduced image.

Looking at the effect of Noise Reduction on the A7S and A7R image shadow areas gives just about what we might expect.  The A7S goes from a small noise to even smaller noise patterns.  The A7R goes from moderate sized noise to smaller though still obvious noise patterns.

A friend sent me samples of what can happen when using a specialized noise reduction software.  The results are impressive.  In fact, I can easily imagine that a properly-processed very under-exposed image can be massaged into something pretty darned nice.  My suggestion would be if the highlight areas aren't showing noise, then mask the shadow areas and apply noise reduction there.  At which point we've crossed over from considering the sensor to taking advantage of advances in image processing software.
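
That shadow-masking suggestion can be sketched in a few lines.  A 3-tap mean filter stands in for real noise-reduction software, and the shadow threshold is an arbitrary assumption -

```python
# Denoise only the pixels darker than a threshold; leave clean highlights
# untouched.  Sketched on one row of grayscale pixel values.

def denoise_shadows(row, threshold=64):
    out = []
    for i, px in enumerate(row):
        if px < threshold:                    # shadow pixel: smooth it
            window = row[max(0, i - 1):i + 2]
            out.append(sum(window) / len(window))
        else:                                 # highlight pixel: keep as-is
            out.append(px)
    return out

# Note: a shadow pixel right next to a highlight picks up some bleed from
# the bright neighbour -- real tools feather the mask to manage this edge.
noisy = [12, 30, 8, 25, 240, 235, 242, 238]   # noisy shadows, clean sky
print(denoise_shadows(noisy))
```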

Coming back to sensors, and working with several Sony A7S ARW (RAW) sample images downloaded off the 'net, I feel I'm beginning to understand what Michael Reichmann was saying about medium format sensors and the A7S.  I'm not sure it adds up to much.  I don't hear people raving about the A7S image quality over, say, the A7 or sensors from other camera manufacturers.  Perhaps it only matters when the winters are dark, cold, snowy, and I have way way too much time on my hands, but I think there _might_ be something there.

Chasing Pixies has become my day job.  

Oh, but I have one more step to take in this Wacky Adventure.  Stay tuned for part Three.

Friday, February 24, 2023

Reconsidering the Sony A7S... [part One]

I'm still thinking about the Sony A7S.  I'm not sure why, but I am.  Er, well, yes, I do know why I'm still thinking about it.  Something is Grinding my Gears.

Michael Reichmann wrote some years ago about how the A7S files "felt" similar in quality to the Kodak CCD medium format sensor output.  I couldn't help but notice he didn't say the same thing about the A7 nor the A7R sensors.  Both cameras had been on the market a year before the A7S was introduced.  

The 24mpixel A7 camera produces even now beautiful images for me.  What could be better?  Well... maybe there was a year's worth of developmental "baking" of more quality into the A7S over the earlier sensors?  Or, as was the thought at the time, was the 8.4micron photo site size the Source of Fat Pixel Goodness?

I read the DPReview review of the A7S, found and downloaded two ARW (RAW) samples.  I thought it might be interesting to see how they behaved when subjected to my standard image processing using Rawtherapee and the Gimp.  It feels like there might be an opportunity to learn something using someone else's comparison images.


Setup~

    NOTE: the DPReview images were 1.7 and 2EV underexposed and filed under the heading of "Dynamic Range in the real world."  They were trying to share something they "saw" regarding noise control and dynamic range.  So I needed to do what they did, raise the shadows to the point the overall image looked somewhat OK, then consider the noise and dynamic range, particularly in the dark amplified areas.  Where DPReview used Lightroom's Exposure Value slider, I used Rawtherapee's Lightness so as to avoid blowing out the highlights.
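The reason for reaching for a Lightness/curve control rather than a straight Exposure push can be sketched numerically.  This is my own simplified model (linear 0-1 values, a plain gamma standing in for a tone curve), not Rawtherapee's or Lightroom's actual math:

```python
import numpy as np

# Toy "image": linear values from deep shadow to near-white highlight.
img = np.array([0.02, 0.10, 0.50, 0.95])

# A straight +2EV exposure push multiplies everything by 4 and
# clips the highlights at 1.0 -- detail above the clip point is gone.
ev_push = np.clip(img * 2.0**2, 0.0, 1.0)

# A curve/"Lightness"-style lift (here a simple gamma) raises the
# shadows strongly but compresses toward 1.0 instead of clipping.
curve_lift = img ** (1.0 / 2.2)

print(ev_push)     # highlight pinned at 1.0
print(curve_lift)  # highlight stays below 1.0
```

The shadows come up either way; the difference is what happens at the top of the scale.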


Comparison ~

DPReview image

DPReviews base image 

 

DPReview image ~ reworked

DPReviews base image
with my processing applied

 

Sony A7S vs A7R IQ Comparison 1


Comments ~

The rather heavy Rawtherapee processing was needed to bring up the shadows of the two under-exposed images borrowed from DPReview.  These images try to stress the image capture system by amplifying (expanding) the dark areas so that we can evaluate things like dynamic range and sensor noise (particularly in the expanded/raised shadow areas).  Most of us would never shoot a photograph in the real world this way, and as we will see in future posts, I have questions about the validity of this as a test method.

The ISO 100 base image, at the A7S' native 4240x2832 pixel resolution, looks smooth in the light areas and slightly noisy in the deep and now raised shadow areas.  This is pretty remarkable to me, particularly when I look at the base uncorrected image and compare it to the lightness and curves processed result.

To my eyes the A7S has lower noise than the 7360x4912 pixel A7R image.

We were led to expect this, right?  Seems intellectually correct.  Lower pixel density sensors have lower noise than sensors where the photosites are packed in like sardines, right?

So what's my Gear Grinding problem?  Well, here it is.  DPReview wrote "... Who wins? In a nutshell? a7R, hands down..."

Please tell me my eyes are really bad, or tell me that we see the same thing.  I don't see where the A7R image "wins" in any dimension except size (har!).

The Gear Grinding problem is partially explained in the next sentence.  "...we've downscaled [emphasis mine] the a7R image to the a7S' 12MP resolution, the a7R offers more detail and cleaner shadow/midtone imagery compared to the a7S. Downscaling the a7R image also appears to have the added benefit of making any noise present look more fine grained; the a7S' noise looks coarse in comparison..."

This is where they have me really and honestly Gear Grindingly stumped.  If you're trying to compare image qualities between different sized sensors, why on Gawds Green Terra Firma would someone want to downsize a larger image to the smaller sensor dimensions?  

I'm serious.  Why?  What does it show?  What does it prove?  How would it be relevant to photography except when someone is willing to throw away potential resolution to, what?, prove an Obvious Point?

The Obvious Point being that something called "pixel binning" or downsizing works.  Noise across an image will be averaged out (or reduced).  With that will come more accurate colors (averaging out the chromatic variations brought, in part, by subtle noise, even at low ISOs).

What if, instead, reviewers were to take a 4240 pixel long section out of the A7R and compare like-size image to like-size image directly?  Adjusting for the field of view by correctly selecting the focal length, of course.  Wouldn't that make for a more honest evaluation, if what you're trying to evaluate are things like dynamic range, color depth, resolution, and noise?
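The crop-instead-of-downscale comparison I'm proposing is simple to express.  A small numpy sketch, my own illustration rather than anyone's published method; the arrays are just stand-ins at the two cameras' nominal output dimensions:

```python
import numpy as np

def center_crop(img, h, w):
    """Cut an h x w window out of the middle of img (no resampling)."""
    top = (img.shape[0] - h) // 2
    left = (img.shape[1] - w) // 2
    return img[top:top + h, left:left + w]

# Stand-ins for the two sensors' native output sizes.
a7r = np.zeros((4912, 7360))   # ~36mpixel
a7s = np.zeros((2832, 4240))   # ~12mpixel

# Compare a native-resolution A7R section against the full A7S frame,
# pixel for pixel, instead of downscaling the A7R file.
a7r_crop = center_crop(a7r, *a7s.shape)
print(a7r_crop.shape)  # (2832, 4240) -- same pixel dimensions as the A7S
```

No pixels are invented or averaged away; both images are then viewed at the same 1:1 scale.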

And speaking of resolution, the A7R reportedly comes without an AA filter and is supposed to be sharper than those that come with AA filters (such as the A7 or A7S).  And yet, the A7S appears to deliver resolution as well as the A7R.  Here is Yet Another Wabbit Hole to fall down, but we'll save this for another time.

In the end the DPReview-provided ARW (RAW) sample A7S image looks to me to be smoother and sharper than the A7R _before_ I apply any noise reduction or capture sharpening.  I'll make those adjustments in part Two of this series so we can begin to see what the differences might be as we start processing an image.  

For the moment, however, can someone explain to me like I'm in kindergarten how downsizing an A7R image to A7S dimensions gives us a valid departure point for understanding relative image quality?

Sunday, February 19, 2023

Incomplete understanding can lead to wrong conclusions...

I stepped in "it" recently and I'm here to confess the errors in my understanding.

Retromobile ~ 2023

In photography I've run across, for years and years, loads of marketing promises, distortions, half truths, data selected to fit preconceived curves, bendings of reality, and outright lies.  

It's in part why I've written here and elsewhere about what I've found.  Methods, processes, setups are explained so that anyone can see for themselves what's what.  You can't (and shouldn't) blindly trust me.  Trust if you must, but verify.

Recently falling down a Wabbit Hole labeled "Fat Pixels", I came to a set of conclusions that, in retrospect, were malformed.  I used DxOMark's color depth and dynamic range numbers to evaluate the image quality of the Sony A7, A7R, and A7S cameras.  Based on this I made decisions on gear purchases and spoke ill of the "Fat Pixel" discussion, particularly as it related to the Sony A7S and its "Fat Pixel" 8.3 micron sensor site size.

Retromobile ~ 2023

What I was trying to evaluate was whether there might be some magic "Pixie Dust" in the small 12mpixel system that I might find good reason to enjoy in my own image making.  Of course I did nothing to evaluate Sony's own 15+EV dynamic range claims.  I assumed (bad, bad, bad, I know) that Sony was simply wrong and had gone overboard in their marketing claims.

Well, the actual error I made came from my lack of understanding how DxOMark's numbers are calculated.  I relied on the numbers alone to tell the whole story. In my defense, when I read through DxOMark's comments I didn't see anything that led me to believe they were considering anything but the entire sensor output.

However, reading criticisms of DxOMark I've come to understand that sensor output is _downsized_ from the native sensor resolution to 8mpixel before dynamic range and color depth are evaluated.  Right here is the source of the error.

Retromobile ~ 2023

I should've guessed at this and wondered at the time how the numbers were coming out the way they were.  Looking at "test" results for various Sony Full Frame cameras we can see that the higher megapixel count sensors ALWAYS score better than the lower megapixel count sensors.  Always.  This is true for both color depth and dynamic range.  Spend some time on their site to verify my claim.

How can this be?

In the process of downsizing an image for evaluation, noise is averaged out, dynamic range increases (by lowering the noise floor), and colors become purer.  I know this from my three image stack/average comparisons.  Noise averaging works similarly when downsizing.  We're not seeing anything that comes straight off the sensor when we read DxOMark's numbers.  The image has passed through some kind of conversion process.
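That averaging effect is easy to check numerically.  A small numpy sketch, my own toy model rather than DxOMark's pipeline: downsizing a noisy flat patch 2x linearly by averaging 2x2 pixel blocks cuts the noise standard deviation roughly in half.

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat gray patch with Gaussian "sensor" noise.
patch = 0.5 + rng.normal(0.0, 0.02, size=(1000, 1000))

# Downsize 2x linearly by averaging each 2x2 block of pixels.
half = patch.reshape(500, 2, 500, 2).mean(axis=(1, 3))

# Averaging 4 samples cuts noise standard deviation by ~sqrt(4) = 2.
print(patch.std())  # ~0.02
print(half.std())   # ~0.01
```

Lower noise floor means a higher measured dynamic range, which is exactly why a downsized evaluation scores better than the native one.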

There are other sites that compare and test sensors in camera systems, of course.  But these convert the original RAW file into TIFF.  So we're not seeing anything straight off the sensor there, either.  While this might be a different process than the one DxOMark uses, there is still a conversion of the image and I'm not convinced this won't affect the output.  There's no method evaluation information that I've easily come across that guides readers one way or another.

Retromobile ~ 2023

How, then, do we get at an understanding of color depth and dynamic range?  How do we get beyond the various comparison site's use of image conversions which can alter evaluations and results?  If someone knows, I'm all ears.  Please.  I'd like to learn.

As is often the case when I look into commercially available photography tools, I turn to the Real World to see what's going on.  In this way, I found a site that allows people to download Sony A7S ARW files.  I grabbed a couple of images and put them through my normal Rawtherapee processes.  This is something I should've done earlier, but am happy I finally got around to doing now.

I can't quantify what I saw, of course.  There is nothing being measured.  All I can say is that A7S ARW files look like great starting points for still image processing.  They look very subtly different from my A7 24mpixel files.  The A7S colors are deep and rich.  The resolution is outstanding after being passed through Capture Sharpen.  The low ISO noise levels are very low, indeed.

Perhaps I should after all pick one up and try it?  I still have a few Nikkor lenses to sell.  Then we'll have a look.  A7S prices are like A7 prices around these parts.  Cheap, and getting cheaper by the day.

Unexpected Renaissance ~ Retromobile ~ 2023

Saturday, February 18, 2023

Back to the Past... by Leaping into the Present...

It's been a couple months since I made a decision to sell my cherished Nikon Nikkor manual focus lenses and replace them with more current AF Sony Zeiss and Sigma optics.

As I age I'm getting a little more "fumbly" with my cameras and lenses.  'Ol Shaky I'm becoming... and...  my eyes aren't what they used to be... further...  I have little to zero patience working the little buttons that enlarge a scene so I can accurately manually determine the point of focus.

Looking at the Toy Closet and considering how I actually use cameras and lenses I find I'm torn somewhat between APS-C and Full Frame formats.  Yet there is a useful division between the two formats.  I see I use zoom lenses for the most part on APS-C and used my old Nikkor fixed focal length lenses on Full Frame.

Couple this with how I feel about the current state of the various digital sensor sizes as they relate to something I knew quite well, film photography.  It looks like I found a Way Forward.  From what I wrote in a prior entry -

"... here's a thought that ... I've had ... for some time, now.  In a fair approximation, digital sensor output to film equivalents (in terms of image quality) -

  • 1inch sensor == medium format film
  • APS-C sensor == 4x5 inch film
  • Full Frame sensor == 8x10 inch film..."

 

The Way Forward, or perhaps more properly said, the Great Rationalization became -

  • Image Stabilized (IS) zoom lenses for motorsport and travel on APS-C
    • IS is useful to this increasingly Shaky Old Guy
    • Zooms are flexible where "sneaker zoom" is less practical
    • Add a silent shutter body to the kit for shooting in quiet situations
    • Retain three fixed focal length AF Sigma DN Art lenses "just in case"
       
  • Three fixed focal length lens system for use on Full Frame
    • Emulate my old Large Format film approach in digital
    • Use a tripod (where possible) to keep Old Shaky at bay
    • Shoot at low ISO and with wide angle lenses at small-ish apertures

With one part of the Great Rationalization being to emulate my old Large Format kit, I settled on equivalent focal lengths. Why?  Because my most successful images for 30 years or more were with that set of focal lengths.  

Late in my Film Phase I replaced the 90mm and 150mm lenses with a single Schneider 110mm XL f/5.6.  That works out to be 30mm on Full Frame digital, but no one makes one of these.  So I returned to my earlier kit.  Besides, I've gotten used to shooting 24mm on the digital short end of things.

I settled on a 24mm/35mm/55mm three lens kit.

Why not consider other lenses than the ones I ended up with?

To begin with, I don't need fast wide angle lenses.  For the kinds of images I make I've been using f/4 as the widest aperture for years (Sony Zeiss 16-70mm f/4 SEL OSS).  It's been more than sufficient.  Fast Sigma 24mm and 35mm f/2 or faster lenses would be wasted on me.  

The Sony 50mm f/1.8 tests OK, but I wanted something with stellar performance wide open.  I was looking for something better than the beautiful Nikon Nikkor 50mm f/1.4 AiS I once owned.  In case I ever want the old/early Nikkor-S "look", I'm keeping one "just in case."  Further, I couldn't afford any of the faster 50mm Sony or Zeiss offerings. 

Another lens I considered was the Sigma 65mm f/2.  I nearly bought one, but missed the opportunity by an hour.  The next opportunity that came up was the 55mm I ended up buying.  I'm sure the Sigma is really nice, if a little heavier than the 55mm.  I rationalize not getting the 65mm Sigma by saying I never fell in love with 240mm on 4x5, which is its rough equivalent.

What about zoom lenses for Full Frame Sony?

There are a brace of new and potentially interesting lenses coming on the market these days.  I'm thinking of Sony and Tamron's new 20-50mm/20-70mm lenses.  And there are of course many zoom lenses in f/4 or f/2.8 that have been around for a long time.  What keeps me from getting serious about any of them are two things.

First, I have two zooms for APS-C that serve me very well.  I have a wonderful 16-70mm ZA f/4 OSS.  Its rendition is better than some fixed focal length lenses I've used over the years.  And I have a 70-350mm OSS G that I use for motorsports and birding.  It's nice, sharp, and very useful.  The only downside is the rather obvious pin-cushion distortion that requires correction in processing.

Second, there's no competition when it comes to size, weight, and ease of use.  Have you ever picked up a 24-70mm f/2.8 zoom from any manufacturer and hefted it while holding a little fixed focal length "pancake" lens in the other?  If you have, you'll know the other reason I don't consider Full Frame zooms.  I might change my mind someday.  But for now, No Way Jose.

Here's how it settled out -

Sigma 24mm DG DN f/3.5 

I found one at a very good price out of Germany.  It's used but came in LN (like new) condition.

Why this lens?

  • This matches the perspective of my old 90mm Schneider Angulon f/6.8 and Super Angulon f/5.6 lenses on 4x5 inch film
  • I don't use narrow depth of field when shooting wide angle
  • This is quite nearly a "Flat Field" optic
  • It's scream'n sharp from wide open clear across the field
  • It's light and small

 

Sigma 24mm f/3.5 DG DN on Sony A7


Sony Zeiss 35mm f/2.8

This one I picked up from a guy here in Paris.  He'd used it to shoot a little fashion video, but the focal length wasn't what he liked, so it reportedly sat unused for several years.

Why this lens?

  • This matches the perspective of my old 135mm and 150mm lenses on 4x5 inch film
  • I don't use narrow depth of field when shooting wide angle
  • This is quite nearly a "Flat Field" optic
  • It's scream'n sharp from wide open clear across the field
  • It's super small and super light (I'll call it "pancake"-like)
  • The resolution is a nice "meaty" Zeiss sense of "fat" sharpness

 

Sony 35mm f/2.8 ZA on A7


Sony Zeiss 55mm f/1.8

I bought this lens from a guy here in Paris who has moved on to the new Sony 50mm f/1.2 FE. 

Why this lens?

  • This matched the perspective of my much loved 210mm Schneider Symmar-S MC f/5.6 that I used for decades on 4x5 film
  • I do use narrow depths of field sometimes when using longer focal lengths
  • Out of Focus rendition is glorious
  • It's scream'n sharp from wide open clear across the field
  • This is a substantial lens that fits nicely on the Full Frame body
  • The resolution is a nice "meaty" Zeiss sense of "fat" sharpness

From the first "click" and "pixel peep" I was smitten with this older, slower "standard" focal length lens.

 

Sony 55mm f1.8 ZA on A7

 

When I say that the Zeiss labeled Sony lenses have a resolution that is a nice "meaty" Zeiss sense of "fat" sharpness, I know I'm opening a Big-'Ol-Can-O-Worms.  

All I can say is that after staring at lens charts, comparisons, and my own images by "Pixel Peeping", etc, after a while I've developed a sense of how lenses render resolution.  I can't fully describe it, except to say that some lenses feel "thin" and others feel "thick", "fat", "meaty."

In my case, Sigma lenses are brilliantly sharp for such wonderfully small money.  Except, they sometimes feel "thin" in terms of resolution/sharpness.  I wish I could quantify it, but I've yet to find a way to do that.

By comparison, Nikon Nikkor manual focus lenses, and now these Sony Zeiss feel "meaty", "fat" in terms of resolution/sharpness.  I know.  I know.  It's strange to use these terms and phrases. What can I do?

Can I see this in practical work?  Normally no.  Not really.  By the time I get to actually making a photograph, image lighting, composition, and processing become more important than chasing "Pixie Dust."  Knowing what I think I know, sure, I think I can see something if I look really really closely.  I fear it might be Confirmation Bias or Wishful Thinking or Hopium that skews my looking so I try not to put too much into it.

There it is, in all its glory, and paid in full, absolute truth in advertising, my Great Rationalization effort.  

In a future post I'll perhaps share a few results that show I'm really enjoying working with these new tools.

Friday, February 17, 2023

Falling into another Wabbit Hole... [part Four]

I've gone from the Already Known, to the Mildly Interesting, and onto the Absurdly Ridiculous. With this entry I run headlong into the Certifiably Insane.

I'll make one last pass at image up-sizing.  This time I'm going to try and more than quadruple the area of an image by going from 24mpixel to 108MegaOutrageousHolyMolyBatmanDearGawdAlrightyPixels.  In the process, my poor 'ol Linux image processing 'puter will be super stressed and might just keel over, paws up.

The process of upsizing will be done in steps and by hand so that each stage is known and understood.  That way if I want to modify or substitute something different at any of the steps I'll have all the Points of Reference needed to properly evaluate an outcome.

This is in contrast to software providers whose products pretty much hide what's going on behind the marketing hype of "AI."  I can't for the life of me understand how this kind of software is "AI" driven, if by AI they mean Artificial Intelligence.  Where is the Promised Land of Machine Learning when the current state of what's called "AI" gives wrong answers?  Seems like a willfully stupid machine if you ask me, and "AI" could actually stand for "Absurdly Ignorant" for all I know.

OK.  I'll step off the soapbox... um... yes... photography... make big beautiful picture bigger.  That's it.  I'm back, now.  Ack! what a detour.


Retromobile ~ 2023

 A photograph of simply the most
beautiful Fezzaz racecars ever constructed.
Captured using a Sony A7 and Sony 35mm f/2.8 ZA.
On a tripod, of course.  How else do you get
something like this ridiculously sharp in low
light at ISO100 and the lens set at f/8?

Multi-image-file averaging (3-file image stacking), even at low ISO, produces Super Clean Noiseless files from my nearly 10 year old Sony A7 Full Frame camera, at least from what I've seen so far.  Applying a bit more sharpening on a 1.5x linear upsize returns cleaner files at 9000 pixels long than I ever ever got out of my old Canon 5D MkII DSLR at its base 5616 pixel image size.  

From what I've experienced, sensor technologies have come a long way and I feel Sony is currently in the lead.  Since Nikon used/uses Sony sensor foundries for their light capture devices, I'll include Nikon in a sub-heading to Sony's wisdom in this area of design and manufacturing.  In fairness, though, I read somewhere recently that Canon may have caught up with Sony with the latest iterations of the R-series Full Frame cameras.  So don't take my word for anything and test for yourself.

 

Setup ~

Again, using Human Intelligence (HI!!!) to capture three images to stack as a noise reduction and file upsize exercise -

  • Sony A7
    • ISO100
    • 2second timer
    • Back-button focus (to maintain image alignment - focus once, shoot three)
       
  • Sigma 24mm f/3.5 DG DN
    • Set to f/8
       
  • Manfrotto tripod
  • Rawtherapee
    • Curves setting
    • Capture Sharpen
    • Noise Reduction
    • Kodak film emulation
  • Gimp
    • 3 image stacking in layers
    • Opacity setting of layers
    • Scale Image NoHalo image upsize operator - sample size 1200DPI (this is important)
    • Additional sharpening application (various G'Mic integrated operators)

To reiterate, setting the file upsize filter to 600DPI (or in my case 1200DPI) is important because it sets the number of "slices" of information per inch that the upsize operation will take.  Regarding the upsize algorithm, I've previously written about NoHalo.  I won't cover that again here, except to note it's much better than anything else I've tried to this point.
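As an aside, the layer Opacity step in the Gimp list above can be checked arithmetically.  A small numpy sketch, my own illustration of the blend math rather than the Gimp's code: blending the second layer at 50% opacity and the top layer at 1/3 opacity reproduces the exact three-frame average.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three aligned "frames" of the same scene with independent noise.
scene = rng.uniform(0.0, 1.0, size=(4, 4))
frames = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(3)]

# Layer-style blend: bottom layer fully opaque, the next at 1/2
# opacity, the top at 1/3 opacity.  Each blend is a*(1-o) + b*o.
blend = frames[0]
blend = blend * (1 - 1/2) + frames[1] * (1/2)
blend = blend * (1 - 1/3) + frames[2] * (1/3)

# The result is exactly the arithmetic mean of the three frames.
print(np.allclose(blend, np.mean(frames, axis=0)))  # True
```

In general, blending frame N of a stack at opacity 1/N keeps every frame weighted equally.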


Image Upsize Comparison Base Image

 

Comparison

[As always, click on the following image and inspect it at 100percent to see whatever there might be to see]

For this exercise I will consider the final set of rows in the following image. They are the ones upsized to 12000 pixels on the long dimension.

 

 Image upsize comparison ~ various techniques

Comments ~

Keep firmly in mind that I'm taking a 24mpixel image and upsizing it to an effective 108mpixel.  I'm _not_ adding any information that's not already in the original base image.  I'm simply taping and mudding over the empty spaces in what I feel is the best manner possible to make things appear sharp and acceptably printable.
 
You can see the basic upsized three image non-noise reduced averaged sample is smooth across the field.  There is detail where we want it and nothing looks too pixelated.  If you look hard enough you will find a wee-bit-o-pixelation, but it's certainly not "bad" by any means.

Sharpening the non-noise-reduced upsize reveals more pixelation.  This is offset by increased local contrast.  Depending on the printer we might only see the pixelation on very close examination.

I did a second 108mpixel upsize pass, this time using the noise reduced files to stack and then resize.  I think I'd go crazier than I already am trying to find significant differences between the two image stack approaches.  Noise reduction or noise cancelling simply works.

OK.  OK.  If I squint, stand on my left leg, curl my right pinky, and howl at the Full Moon when it's in conjunction with Saturn maybe I can see just a hint of more smoothness out of the noise-reduced three image stack upsize.  But as I said, any differences are really quite insignificant.

Just how Gud(tm) all this is can easily be seen when comparing these 108mpixel upsized works against the well processed and very lovely 24mpixel image viewed at 200 percent.  This is seen in the very last row in the comparison. 
 
In terms of print size while retaining as much "image quality" as possible, 300DPI prints from this 108mpixel file reach 40 inches on the long side.  Returning to one of Thom's thoughts on print sizes and DPI, and using his lower end of 188DPI (I can't find the original article, but he mentions elsewhere that 188DPI is very printable on Epson), we get a good resolution 64 inch long print.  Whew! 
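The print-length arithmetic is simply the long edge in pixels divided by the print DPI.  A quick check, assuming the 12000 pixel long dimension of the upsized file:

```python
# Longest printable dimension = pixels on the long edge / print DPI.
long_edge_px = 12000  # the upsized file's long dimension

for dpi in (300, 188):
    print(f"{dpi} DPI -> {long_edge_px / dpi:.0f} inches")
# 300 DPI -> 40 inches
# 188 DPI -> 64 inches
```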
 
If you're interested, ask me sometime how we used to print 40x60inch from 35mm negatives when I worked at Samy Cameras Crossroads to the World print lab.  I have a few thoughts on the Current State of Things, print sizes, and viewing.

As a side note, this is as big a file as my little Linux laptop can handle.  I have to close everything but the Gimp when performing the upsize and sharpening operations.  Nothing can be in memory.  All non-essential processes have to be closed for this thing to successfully run the Richardson Lucy G'Mic sharpen algorithm.  If there's a leftover process running somewhere in the background, 108mpixel is too much of a burden and the Gimp gracefully, but firmly closes from an Indeterminate State.

... hmmm... I've been thinking about a new HP 4k UHD i7Core 32gb RAM laptop... maybe I have good reason to pick one up...?   Nah... but perhaps...
 
Yes.  This has been fun.  It really has.  But this is it.  I swear.  I'm finished.  No more upsizing.  No more Wonderland.  No more Dead Pope Society stuff.  No more Dell'Orto tuning.  Nutt'n.  Done.  Put a fork in me.  I'm cooked.

Thursday, February 16, 2023

Flickr account activities for 2022...

I've had 25 images that were Explored in 2022.  I find that pretty amazing.  Yes, I realize that "Explore" on Flickr is some kind of automated image selector.


My photos on Explore in 2022.
 

 

And the most Fav'd image of 2022 is... 

My most popular photo of 2022.

 

My Flickr entry point remains here.

What I find interesting is that both of these images are in black and white.  The vast bulk of what I post to Flickr is in color.  Hmmm...

Monday, February 13, 2023

Falling into another Wabbit Hole... [part Three]

I guess I could explain myself so that you can understand, from a certain perspective, what This Is All About.

I've always been a little bit odd.

In the chess club in school I met other geeks and dorks and social stumblers.  We used to be amazed at how easily the "In Crowd" seemed to fit "In" and "Be Someone" (important).  Not us.  None of us were gifted in that way.  

My family still talks about the mold I'd grow on bread so that I could look at the growth under a microscope.  They talk about the pond scum I'd bring home in a jar where I'd hope to find (again under a microscope) various bacteria swimming around and behaving in interesting ways.  And they still laugh about the time I went to a pier down in the harbor to collect sea squirts.  There are certain varieties that are bio-luminescent and I wanted to learn what spectrum of light they emitted.  This and much much more before I was 19 years old.

 

30 years old silver prints - scanned

The bike I learned about desmodromic
valve actuation on.  It's a very pretty
little 500cc Ducati Pantah.

Friends' parents who worked at JPL designed, built, then shot the first deep space research vehicles into the sky.  Many of these early spacecraft are still communicating with their command centers here on Earth.  Other friends' parents filmed the first successful mass circulation surfing and motorcycle movies, and ended up with a rather impressive collection of Humber automobiles.  My father helped build the LEM that brought men home from the moon.  My friends from that time still talk about how it was.  Everyone was "doing big stuff."

Later and on a smaller scale, the colleagues I got along with best at work were the ones who "did stuff."

One guy in manufacturing repaired mechanical wrist watches.  Another in the machining department built motorcycle parts from scratch.  Someone else in QA raised horses.  A researcher and his wife collected Alfa Romeos and Ferraris and worked to maintain them together.  Late in my career I worked with a guy in service who'd written an important paper on vacuum tube two and three pole characteristics.  The music he quietly played on a monaural vinyl record player-based system he'd built was better than any CD stereo I'd ever listened to.

Me?  Well, I had my cheap yet somewhat exotic cars (by American standards). I used to run them up and down Sunset Blvd when I lived and worked in LA.  Later, I had motorcycles and took the opportunity to appreciate needle/roller bearing bottom end crankshafts and desmodromic valve trains.  Fiats, Jaguars, and Ducatis were part of this "slightly odd and out of place" world I lived in.  And here I was thinking it was just "cool stuff." 

 

Scanned Silver Print - 1970's

The 1964 Jaguar E-type FHC that I
learned how to tune 2inch SU's on.

 

Need a new timing belt on your little X1/9?  Easy peasy.  Want to replace an 860GT crank with the 900SS version?  I can help.  Want to tune Dell'Orto carbs on either desmo or spring-valve Ducatis?  Ring me up.  Want to properly tune early Jag E-type triple 2 inch SU's (3.8 litre or 4.2, your choice)?  You know where to find me.

When it comes to photography, I do all this wacky, crazy, weird, dorky, geeky, bizarre stuff simply because I'm interested in it.  If I have a question, I try to find ways of finding an answer.

The only person any of this matters to is me.  I share this stuff just in case there's someone interested in this, too.  It's simply that.  I'm a geek.  I'm a dork.  I'm a retired engineer.  I don't always get it "right", but I try.

So... onward... with one more blog entry... likely to be followed by a few more in the near future... and so on...

This Geeky Dork (me) fell into one Waskuwy Wabbit hole labeled "Fat Pixels", kept going and fell into another Waskuwy Wabbit hole labeled "Enhanced Image Quality" and *thud* now find myself sitting on the ground in another Wonderland of Wacky Nerd Heavenly Dorkiness.

To briefly review how This Nerd got here, I started by wondering if the 12mpixel 9micron sensor site "Fat Pixel" Sony A7S had better image quality than my current puny 5micron sensor site 24mpixel A7.  Finding the answer is "no" (the A7 has better color depth and dynamic range), I turned to seeing just how good I could make a basic single scene A7 image.  

My chosen approach was to reduce noise and increase color accuracy by stacking 3 correctly registered images.  Stacking 3 images eliminates the low level, often difficult to see color variations in smooth areas of a scene.  Not many people would notice it, that's how difficult it is to see, but I did see it, and I find that the image stack exercise does work.  Do photographers "need" to do this for images in general use?  Certainly not! and absolutely not!  The Sony A7 produces gorgeous images as is.  No need to Tinker Around. 
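For what it's worth, the expected gain from a three-frame stack is easy to model.  A numpy sketch, my own toy model of uncorrelated sensor noise: averaging three frames cuts random noise by about the square root of three, roughly 1.7x.

```python
import numpy as np

rng = np.random.default_rng(2)

sigma = 0.03  # per-frame noise level in this toy model
frames = rng.normal(0.5, sigma, size=(3, 800, 800))  # three noisy exposures

stacked = frames.mean(axis=0)

# Uncorrelated noise drops by ~sqrt(3) when averaging three frames.
print(frames[0].std())    # ~0.030
print(stacked.std())      # ~0.017, i.e. 0.03 / sqrt(3)
print(sigma / np.sqrt(3)) # the predicted value
```

The same math says the color variations in smooth areas, being mostly noise, average toward the true color.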

 

Retromobile ~ 2023

Random image made up of three
images, stacked, to reduce noise to
practically zero.

... but... now that I've arrived at an even more gorgeous stacked zero noise image, what's next?  Continued nerdiness/dorkiness/geekiness, of course.

Looking closely at the Capture Sharpened, noise reduced, 3 image stacked composite got me to thinking.  If there really _is_ more usable information in this file than, say, in a base single shot low ISO Sony A7 image, like I think there is, maybe we can do something really "cool" with it by using Human Intelligence (as opposed to the over-used marketing phrase "Artificial Intelligence") and upsize it?  I've got time on my hands, so why not find out?

Setup ~

Using Human Intelligence (HI!!!) to capture three images to stack as a noise reduction and file upsize exercise -

  • Sony A7
    • ISO100
    • 2second timer
    • Back-button focus (to maintain image alignment - focus once, shoot three)
       
  • Sigma 24mm f/3.5 DG DN
    • Set to f/8
       
  • Manfrotto tripod
  • Rawtherapee
    • Curves setting
    • Capture Sharpen
    • Noise Reduction
    • Kodak film emulation
  • Gimp
    • 3 image stacking in layers
    • Opacity setting of layers
    • Scale Image NoHalo image upsize operator - sample size 1200DPI (this is important)
    • Additional sharpening application (various G'Mic integrated operators)

The file upsize filter set to either 600DPI or in my case 1200DPI is important because it sets the number of "slices" of information per inch that the upsize will take during the operation.  I'm sorry.  I know that's difficult to read.  Um...

Let's try a more practical approach.  Using any image processing software that has an Image Size operator, set the DPI to 300 or less and see that the final output image looks distinctly "blocky."  The edges of things are pixelated.  The smooth areas aren't.  It looks like a low-rez monitor display from the 1980's.

Now set the filter DPI to 600 or 1200 and observe how much smoother the image is.
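The blocky-versus-smooth difference is easy to reproduce in code.  This is only an illustration of the two behaviors, not what any particular Image Size dialog does internally: nearest-neighbor replication gives the "blocky" look, while bilinear interpolation fills the gaps with in-between values.

```python
import numpy as np

edge = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # a hard step edge

# Nearest neighbor: each pixel is simply copied, so the
# edge stays a hard step -- the "blocky" 1980s-monitor look.
nearest = np.repeat(edge, 3)

# Bilinear (1-D for simplicity): new samples are linear blends
# of their neighbors, so the edge ramps smoothly.
xs = np.linspace(0, len(edge) - 1, len(edge) * 3)
bilinear = np.interp(xs, np.arange(len(edge)), edge)
```

After running this, `nearest` contains only the original two values while `bilinear` contains several intermediate ones.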

Some software that I've used (here's looking at you, Capture One and Adobe) lets you set a filter slice frequency and then completely ignores it.  What I've gotten out of that software is low-rez blockiness.  In short, a mess.  So it's worth trying to sort things out, particularly if the upsized image quality isn't what you feel it should be.  I'm sure I made a mistake somewhere in all that, but I can't find where.  Which is why Gimp is my image upsize software of choice.  The various image manipulation parameters are properly implemented in the Gimp.

I've previously written about the NoHalo algorithm.  I won't cover that again here, except to note it's much better than anything else I've ever tried.


Image Upsize Comparison Base Image

 

Comparison

[As always, click on the following image and inspect it at 100percent to see whatever there might be to see]

For this exercise I will consider the second set of 4 rows in the following image. They are the ones upsized to 9000 pixels on the long dimension.

 

 Image upsize comparison ~ various techniques

Comments ~

Keep in mind that I'm taking a 24mpixel image and upsizing it to 54mpixel.  I'm not adding any information that's not already in the original base image.  I'm simply spackling the empty spaces in what I feel is the best manner possible to make things appear nice and sharp.  It's what recent "AI" upsize software applications attempt to do.  Using "HI" (Human Intelligence) I feel I can understand, and therefore hopefully control, what's going on.
 
In the fourth row, the Rawtherapee Capture Sharpened, noise reduced, Gimp NoHalo upsize to 9000 pixels on the long dimension looks pretty darned nice.  It's perhaps a little "soft" in terms of contrast compared with the original 6000 pixel long base image.  This seems like a good place to start.

Row five is the stacked image with G'Mic Diffusion Sharpening.  We can see a little pixelation along certain high contrast edges of the image.  While I'm sure that it'll print nicely, I wanted to see if I could do a little bit better than this.

Row six shows my 1pixel wide Unsharp Mask attempt to make things a little less pixelated.  Et voilà! we have a not-half-bad result.  There's a little "ringing" (halos) from using this filter in certain places, but these might be difficult to see in a final print.  I like it.
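A 1pixel unsharp mask, and the halos it can ring up along hard edges, can be sketched with plain numpy.  This is the generic unsharp mask recipe (blur, subtract, add back), not Gimp's exact filter; a 3x3 box blur stands in for a 1 pixel radius blur.

```python
import numpy as np

def unsharp_1px(img, amount=0.5):
    """Unsharp mask with a ~1 pixel radius: img + amount * (img - blur)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # 3x3 box blur as a cheap 1-pixel-radius blur
    blur = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - blur)

# A hard vertical edge: after sharpening, values overshoot past the
# original 0..1 range on both sides of the edge -- that overshoot
# is the "ringing" (halo) mentioned above.
step = np.tile(np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0]), (6, 1))
sharp = unsharp_1px(step)
```

After running this, `sharp.max()` exceeds 1.0 and `sharp.min()` dips below 0.0: those excursions are the halos.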

Then, on a lark, and knowing that humans respond more positively to local contrast than they do to absolute resolution, I took a harsh G'Mic sharpening operator, put its Richardson–Lucy output on an upper layer, and set the opacity to 70percent.  At 100percent opacity the Richardson–Lucy sharpening is, to my eyes, ridiculously harsh.  So I backed the opacity off a bit on the sharpened layer and that's what we see here.

What does all this add up to?  Well, if we read one of Thom's articles on print sizes and DPI, it's easy to see that a 9000pixel image would print perfectly to 30inches at 300DPI.  If you use his lower end of 188DPI (I can't find the original article, but he mentions somewhere else that 188DPI is very printable on Epson), we get a nicely viewable 48inch long print.
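The print size arithmetic is just pixels divided by DPI.  A quick sanity check (the 188DPI figure is Thom's, as noted above):

```python
def print_length_inches(long_side_px, dpi):
    """Longest printable dimension at a given output resolution."""
    return long_side_px / dpi

assert print_length_inches(9000, 300) == 30.0            # 30 inch print at 300DPI
assert round(print_length_inches(9000, 188), 1) == 47.9  # ~48 inches at 188DPI
```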
 
Big enough?  Job done, then.
 
That will do for now.  That will do.

In part Four I will attempt to do Something Ridiculous and try to max out my poor ol' Linux image processing computer.

Saturday, February 11, 2023

Falling into another Wabbit Hole.. [part Two]

After falling into a Waskuwy Wabbit hole labeled "Fat Pixels", I realized that under the name of the hole, scrawled by a rough hand, was "Enhanced Image Quality."  At least I had a rough idea of where I was headed this time.  It pays to read such scribbles carefully.

Image quality interests me because of my large format film and photo-print-lab experiences.  I shot a lot of 4x5inch film (many many 100sheet/each boxes), some 5x7, a lot of 8x10 (many 25sheet/each boxes), a little 7x17, and a bit of 12x20 (whew! that stuff was expensive, even back in the day).  

On the smaller side, I pulled hundreds and hundreds of rolls of 120 through Rolleiflex, Mamiya 7, and, for a very short time, Hasselblad cameras.  Prints that I made from that time in the lab and at home are deep and rich.  It goes without saying that seldom was there ever a problem with resolution/sharpness or image quality, once I aligned the negative carrier and made sure the negative stayed flat under the heat of the condenser lens.

Moving to digital at first felt like I was giving up a lot, and I was.  The Canon DSLR gear I had was simply absolutely awful.  I didn't fully appreciate it at the time, but this early digital gear was quite soft and all I knew was "that's just the way things were."  It was State of the Art for the time.  Today when I try to do anything with those early images I remember there's nothing to be done, except to keep the final image size very very very small.

All this changed when, on a whim, I picked up a brand new Sony A6000.  I instantly had a little photographic device that was sharp straight off the sensor.  Good prints up to 20x30inches were no longer a digital challenge.  In fact, the little APS-C A6000 image quality felt to me a lot like large format film.  

It didn't entirely surprise me when Mike Johnston wrote something equating the small 1 inch sensor with medium format film output.  He said "...while there are differences, 1" sensors are now virtually as good as medium-format film was in 1991."  Yikes!  Digital can be (insert Elmer Fudd voiceover) Weely Weely Gud!

Taking this a Small Step Further For Mankind, when using Sony A7 Full Frame bodies there are times when I feel the image quality exceeds that of some of my old 8x10inch contact prints.  Pushing 600 linear pixels per inch to print can be pretty darned beautiful.

So here's a thought that I have.  And I've had this thought for some time now.  As a fair approximation, digital sensor output compares to film equivalents (in terms of image quality) like this -

  • 1inch sensor == medium format film
  • APS-C sensor == 4x5 inch film
  • Full Frame sensor == 8x10 inch film

As a purely intellectual exercise I wondered if there were ways I could extend old A7 24mpixel performance to rival that of my old 12x20inch contact prints.  Tall order, eh?  Well, the scribble on that "Fat Pixel" sign pointing to the Wabbit hole did mention "Enhanced Image Quality", right?

There are several ways of going about exploring "Enhanced Image Quality."  I can stitch a large photo from several smaller sections.  I've done this and it works nicely.  Or I could work from a single scene image.  Or I could stop being cheap and simply buy a camera with more megapixels.  Nah.  Too expensive for this Old Fart who lives on a Fixed Income.  So single scene image it shall be.  Just to see.  If it works?  Yea!  If it fails?  I still have very beautiful 24mpixel images from the A7 cameras.

Borrowing from astro-photography work I considered the single scene approach in two ways.  One was noise reduction as a software function and the second was noise reduction by image stacking.  Noise reduction by software is easy.  Just hit the button and move a couple sliders until things Look Gud(tm).  

Image stacking for noise reduction takes a little more time and care.  It is, how shall we say, an "interesting exercise."  Particularly while shooting right in the middle of an enormous major-event level classic car show.  Which I did.  At Retromobile.  In 2023.  With tens of thousands of people milling around.
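Why does stacking reduce noise at all?  Averaging N independently noisy frames cuts the random noise by a factor of the square root of N, so a 3 frame stack drops it to about 58percent of a single frame.  A small numpy simulation (synthetic Gaussian noise on a flat gray patch, not real sensor data) shows the effect:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((200, 200), 0.5)   # a perfectly smooth gray patch
# Three "exposures" of the same patch, each with independent noise
frames = [truth + rng.normal(0, 0.05, truth.shape) for _ in range(3)]

single_noise = np.std(frames[0] - truth)           # noise in one frame
stacked_noise = np.std(np.mean(frames, axis=0) - truth)  # noise after stacking

# stacked_noise / single_noise should come out near 1/sqrt(3) ~ 0.577
```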

Ah, the Silly Things I do for small gains in obscure, arcane knowledge...

Setup

To capture three images to stack as a noise reduction exercise -

  • Sony A7
    • ISO100
    • 2second timer
    • Back-button focus (to maintain image alignment - focus once, shoot three)
       
  • Sigma 24mm f/3.5 DG DN
    • Set to f/8
       
  • Manfrotto tripod
  • Rawtherapee
    • Curves setting
    • Capture Sharpen
    • Noise Reduction
    • Kodak film emulation
  • Gimp
    • 3 image stacking in layers
    • Opacity setting of layers
    • Additional sharpening application (various G'Mic integrated operators)

Image Upsize Comparison Base Image

 

Comparison

[As always, click on the following image and inspect it at 100percent to see whatever there might be to see]

For this exercise I will look at the first 4 rows in the following image only.  These represent a simple base Capture Sharpened image version, Capture Sharpened with Noise Reduction, Capture Sharpened with Noise Reduction and 3 image stacking, and then all of this with G'Mic Diffusion Sharpening.

 Image upsize comparison ~ various techniques

Comments

Starting with the Sony A7 Rawtherapee Capture Sharpened image I see a low noise, beautiful, sharp image.  There is certainly nothing to complain about.  And yet, if I stare for a long while with the image displayed at 200 or 400 percent there is just a hint of color variance in the smooth areas.

Which brings me to the second row.  Now the image has the lightest possible Rawtherapee luminance noise reduction applied to it.  The smooth areas are now very smooth.  The hard to see color variance is now gone.  It's replaced by a very very hard to see pattern of monochrome noise.  I could stop right here and call it a day.  Job done.  Time to move on.  It's Fizz Time (as in a decent French crémant or Italian prosecco).

But I'm after Deep Knowledge and Understanding, right?  So I continued down this Wabbit hole with a Gimp processed (Rawtherapee does not allow the use of layers) three image, Capture Sharpened, noise reduced, layer stacked photo.  Since I used a tripod I could simply open three different images as a stack and set the opacities of the upper two layers.  The first and top-most layer opacity at 25%, and the second layer opacity at 50%.  I then re-confirmed the images were correctly aligned by looking at the stack at 200% or 400%.
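It's worth spelling out what those opacities actually do, assuming Gimp's normal blend mode composites top-over-below as `top * opacity + below * (1 - opacity)`.  A 25percent top layer and 50percent middle layer give the three frames effective weights of 0.25, 0.375, and 0.375, which is close to, though not exactly, an even three-way average.  A small numpy check:

```python
import numpy as np

def composite(top, middle, bottom, top_op=0.25, mid_op=0.50):
    """Normal-mode stack: bottom at 100%, middle at 50%, top at 25%."""
    below = middle * mid_op + bottom * (1 - mid_op)
    return top * top_op + below * (1 - top_op)

# Feed in constant frames to read off each layer's effective weight.
w_top = composite(np.ones((2, 2)), np.zeros((2, 2)), np.zeros((2, 2)))
w_mid = composite(np.zeros((2, 2)), np.ones((2, 2)), np.zeros((2, 2)))
w_bot = composite(np.zeros((2, 2)), np.zeros((2, 2)), np.ones((2, 2)))
```

The weights land at 0.25 / 0.375 / 0.375 and sum to 1, so the stack behaves very nearly like a straight average of the three exposures.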

Not only do I now have a very very smooth rendition from the noise reduced image stack, but I feel there is more color information.  The image is becoming deeper and richer than the base A7 single image.  There's "something" in here that is nearly compelling from an image quality perspective.

There's yet one more step to take.

With the additional information in the three image stacked photo has come a slight loss of micro-contrast.  The image doesn't have the single image level of "crunchiness" that comes from the very subtle color variations that the basic sensor gives when run through Rawtherapee's Capture Sharpen function.

To counter this I used the Gimp integrated G'Mic Inverse Diffusion sharpening tool.  It's a light, subtle, and very effective way of increasing local contrast in a controllable manner.  For many things where I want just a bit more "snap" I often turn to just this operator.

Now we're getting somewhere!  

This is a glorious image.  Well, to me at least.  The image quality is out-freak'n-standing.  This will print very very well to just about any size I... um... hold on... what's that?... oh no... not another sign pointing to yet another Wabbit hole...  geez... when will this Wonderland of Wackiness ever end...?

Stay tuned for part Three to find out.