Saturday, March 25, 2023

AI poses an enormous problem ~ it's much more than image scanning and Copyright issues

The question of Copyright and AI scanners came thundering home recently when I learned that my Flickr images have been included in at least two major AI datasets. So, I submitted the following Help Request to Flickr -

Dear Sirs, 

Using tools available to see if my images have been consumed by "AI" scanners I see that, yes, indeed, my Flickr images have been scanned. 

I am very aware of Copyright and its limitations. I used to advise legal counsel at the companies I worked for in technology. I did NOT give anyone the right to scan and then use my works. 

I hold copyrights to all but one of my images, and I believe there may be a serious issue with allowing "AI" scanners to do what they are doing. 

What is Flickr's position on this? 

La grève des éboueurs ~ Paris 21 March 2023


Indeed, among many other tasks, I helped advise the lawyers at the last company I worked for.  I'm fairly familiar with what is and is not allowed, and where the gray areas are in copyright law and its application.  Perhaps Flickr had developed a position on the topic?

Their reply, from what I can tell, is likely a lawyer-reviewed/sanitized/canned message that gets sent to everyone who asks about AI -

"Hi there,

Thank you for your input on this matter.

As this is a new and emerging space, we have not yet fully reviewed how Flickr will fit with AI images and photography. 

At this time, there have not been any changes to the Copyright Act to address AI generated images.

It is something we are reviewing closely and should there be any changes that would affect the whole platform, we will certainly notify our members.

As always, if there are instances where a creator identifies their copyright or license has taken place, they can submit a claim through our DMCA process for review and actions here.



La grève des éboueurs ~ Paris 21 March 2023


First, I fail to see where there is something unique about AI that could require legal redefinition/reapplication.  Secondly, Flickr recently announced a new area of AI generated images where people can post the output of AI software.

It feels as if Flickr is giving this a rather big "pass."

Class Action lawsuits have been filed and I wondered how to join. One site I visited indicated that _normally_ 

" don't need to do anything to "join" a class action. If your legal rights are affected by a class action, you usually will only need to get involved once the case settles. In most cases, you will need to submit a claim, either online or through the mail, to receive your portion of the settlement or judgment...

Digging into Flickr's past I found that four years ago it was known that Flickr had been scraped by an AI in a project that involved IBM. 

"...“This is the dirty little secret of AI training sets. Researchers often just grab whatever images are available in the wild,” said NYU School of Law professor Jason Schultz..." 

One more thing. There was an article published back in 2018 on scraping Flickr for "deep learning experiments", complete with code to implement a scraper, where it was noted that - 

"...1. It is not legal and ethical to scrape some websites 

2. Even if you were not concerned about the law or ethics, scraping at scale can be challenging particularly if there are safeguards in place to discourage scraping...

La grève des éboueurs ~ Paris 15 March 2023


Which raises the question: Did Flickr allow scrapers/scanners, or did they just not care and let it happen?  After all, they'd published an API that people could use as they pleased.

Any way I look at it, I'm wondering if the benefit stills photographers in the "Pro" program _pay_ for outweighs the huge issue AI poses, and has posed for several years, for creative copyrighted works.  Flickr may on some level be culpable.

I follow a number of tech websites to try and keep current with the State of Things in science and technology.  Something very concerning turned up just the other day (March 24, 2023 to be exact).

After listening to the podcast I came to the realization that the problems posed by AI far outstrip the scanning of our images.  

AI is out of control and, if not contained, might pose serious risks to us. As this fire is building to disturbingly large proportions, Microsoft has laid off key personnel from their AI Ethics team.  Isn't _now_ the time to staff-up ethics teams on AI?  What the h*ll is going on?

Ack!  Something very important is happening right this moment and I've not been following the ethical and moral implications of AI as closely as I could have.

If you're at all interested in this topic, have a listen. 


La grève des éboueurs ~ Paris 21 March 2023

Yes, the images used to illustrate this blog entry
were carefully chosen to set the mood

Wednesday, March 22, 2023

Activities for 2023

As I have sometimes done in the past, here is a list of potentially fun car and motorcycle related things to do in 2023 should you find yourself around Paris, France.

April 7 thru 9 - Grand Prix de France Historique, Paul Ricard

April 16 thru 22 - Tour Auto - Zagato specialty year, Paris

May 13 thru 14 - God Save The Car, Montlhery

June 2 thru 8 - Rallye des Princesses, Paris

June 17 thru 18 - Cafe Racer Montlhery

June 29 thru July 2 - le Mans Classic (I'll likely not attend this year as I went in 2022)

July 14 thru 16 - European le Mans series, Paul Ricard


Completed Events -

March 21 - La grève des éboueurs (garbage haulers strike) on-going

 La grève des éboueurs ~ Paris 17 March 2023

 As of 22 March the strike
appears to be continuing

March 15 - Manifestation "...450.000 personnes ont défilé dans les rues de la capitale..." ("...450,000 people marched in the streets of the capital...", a protest march) through Montparnasse 

15 March manifestation ~ Paris

It was incredible to see hundreds of
thousands of people take to the streets
in protest


February 24 - March 3 - Salon International de l'Agriculture (Flickr album still updating as of 8 March)

Salon International de l'Agriculture ~ 2023

Beautiful animals, one and all


February 1 thru 5 - Retromobile (Flickr album still updating as of 17 February)

 Retromobile ~ 2023

Unexpected Renaissance Painting???


January 15 - la traversee de Paris (Flickr album)

la traversee de Paris ~ 2023


With new photographic tools to play with, perhaps I'll get a decent shot or two and improve my output?  Regardless, I'll be looking to have as good a time as possible.

See you there!

Vintage Revival Montlhery ~ 2022

Sunday, March 12, 2023

One last Wabbit Hole ~ and it's a whopping BIG one, too!

Accidental Renaissance ~ Retromobile 2023
A little "accidental Renaissance"
to keep things interesting in prep for
reading through somewhat dense material.


[References section updated twice ~ 4 March, and 3 March, 2023]

After whinging and whining about the lack of verifiable, accurate knowledge, I find myself at the bottom of an Enormously Vasty Wabbit Hole.  At the top and just as I fell in I saw it was labeled  "Knowledge You Are Looking For."

The areas I wanted to learn more about but was having a Devil of A Time finding anything useful/correct/truthful included dynamic range, sensor noise, and color depth. I was still poking at the things that might go into defining what a "Fat Pixel" camera might be made up of.

Apologies first: For this blog entry I'm going to move very fast and rather deep.

There are nuances and details that can and should be applied to each of the following statements.  Each step can be an entire study unto itself.  As always, don't trust me, but if you must, make sure you verify.

Here is my present state of understanding.  

Sensor Basics ~

~ There is an analog input side of every sensor ending at analog to digital converters (ADC) with the following components -

  • Light sensitive photo site
  • Analog amplifier
  • ADC

~ There is a digital output side of every sensor starting at the ADC with the following components -

  • Digital image processing chips for stills and video
  • Digital image processing software for stills and video

~ ISO controls the gain on the _analog_ amplifier that feeds the ADC

~ RAW files are made up of the digitized data spit out of the ADC

~ jpg files are the result of the ADC output _plus_ whatever massaging the manufacturer applies on the digital circuitry in-camera

~ In-camera jpg processing _may_ further increase gain digitally (as a second gain function)
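To make the chain above concrete, here is a toy model of one photo site in Python.  Every number in it (full well size, read noise, the simple scaling into ADC codes) is an invented illustrative value, not any real sensor's spec -

```python
import random

FULL_WELL = 60_000    # electrons one photo site can hold (made-up value)
READ_NOISE = 5.0      # amplifier/ADC noise in electron-equivalents (made-up)
BIT_DEPTH = 14        # ADC resolution, as in many RAW files

def photo_site(photons, iso_gain=1.0):
    """One pixel through the analog chain: photons -> electrons -> gain -> ADC."""
    # Photon shot noise, and the well clips when it fills
    electrons = min(random.gauss(photons, photons ** 0.5), FULL_WELL)
    amplified = max(electrons, 0.0) * iso_gain      # ISO turns up the analog amplifier
    amplified += random.gauss(0, READ_NOISE)        # amp/ADC noise lands after the gain
    code = round(amplified * (2 ** BIT_DEPTH - 1) / FULL_WELL)  # digitize
    return max(0, min(code, 2 ** BIT_DEPTH - 1))    # this number is what the RAW file stores
```

Everything downstream of that returned number (demosaicing, jpg massaging, any second digital gain) happens in the digital domain.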

Dynamic Range ~

This is the EV difference between highlight areas with detail and shadow areas with detail.  Sensor noise reduces dynamic range.  Said another way, the quieter the sensor the greater the dynamic range.

In general...

~ The broadest dynamic range is seen at the Base ISO

~ The higher the gain on the analog amplifier (increasing ISO) the lower the dynamic range
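A back-of-the-envelope way to see both points at once (with made-up well and noise numbers): engineering dynamic range is roughly log2 of the maximum signal over the noise floor, and each stop of analog gain halves the usable well before the ADC clips -

```python
import math

def dynamic_range_ev(full_well_e, read_noise_e):
    """Engineering dynamic range in stops (EV): log2(max signal / noise floor)."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical sensor: 60,000 e- full well, 4 e- read noise at Base ISO
base_dr = dynamic_range_ev(60_000, 4)      # roughly 13.9 EV
# One stop more analog gain halves the well capacity the ADC can represent:
one_iso_up = dynamic_range_ev(30_000, 4)   # roughly 12.9 EV - one stop less
```

Real measurements (Photons to Photos, for instance) fold in more effects than this, but the one-stop-of-ISO-costs-one-stop-of-range shape is the same.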

Sources of noise ~

~ The analog amplifier (where signal gain is first applied) that feeds the ADC and ADC itself are the primary sources of noise in RAW files (see Canon DSLRs)

~ In-camera jpg processing digital circuit may further increase gain and thereby add noise downstream from the analog circuit.  In general, if there is noise on the output of the ADC, unless there is noise reduction on the digital side, we will see noise out of the digital circuits, too (see Canon DSLRs).

Sensors ~ ISO Variant

These are the traditional CMOS sensors that we've all come to know and love.  Until very recently, these were the only ones commercially available to us.

~ The electron gathering well at each sensor site has these properties -

  • Traditional big and somewhat shallow electron gathering well design
  • When used at low ISO settings (ie: low analog amplifier voltages) we get -
    • Broad dynamic range
    • Slight image to image variations (subtle noise, if you will)
    • Lowest noise levels tell us what the Base ISO is
  • As ISO increases -
    • Dynamic range decreases
    • Noise increases... maybe... (see astro-photography reference video below about cases where this is _not_ entirely true)

 Recent Sony Design Enhancements ~ ISO invariance

Sensors starting with A6300, A7RIII, A7III, and very obviously the A7SIII, as well as the Sony manufactured MF sensors used in Fuji GFX cameras have _two_ electron gathering wells per photo-sensor site.

~ First electron gathering well at each sensor site has these properties -

  • Traditional big and somewhat shallow well design
  • Used at low ISO settings
  • Broad dynamic range
  • Slight image to image variations (subtle noise, if you will)
  • Lowest noise again tells us what the Base ISO is

~ Second electron gathering well at each sensor site has the following properties -

  • Narrow but deep well design
  • Used for higher ISO settings
  • Shows reduced dynamic range
  • BUT it shows _less_ noise than the big, shallow electron well
  • Highest dynamic range when using this well-type sets a Second Base ISO

Interesting property: ISO invariant sensors tend to show no change in noise as ISO is increased after the Second Base ISO has been switched on.

Note: There are examples of sensors that are ISO invariant from low ISO, such as the Nikon D750 where noise levels do not change from ISO200 on up. See the astro-photography reference video below for the Nikon example.
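A little Monte Carlo sketch of why invariance happens (all noise values invented for illustration): when most of the noise lands _after_ the analog amplifier, raising ISO in-camera beats a software push; when the post-gain electronics are very quiet, the two are indistinguishable -

```python
import random

def shoot(signal_e, iso_gain, post_noise, pre_noise=3.0):
    """Pixel value: (signal + pre-gain noise) * analog gain + post-gain noise."""
    analog = (signal_e + random.gauss(0, pre_noise)) * iso_gain
    return analog + random.gauss(0, post_noise)

def spread(samples):
    """Standard deviation: our stand-in for visible noise."""
    m = sum(samples) / len(samples)
    return (sum((s - m) ** 2 for s in samples) / len(samples)) ** 0.5

random.seed(1)
N, sig = 20_000, 10.0

# ISO-variant (noisy after the amp): 16x analog gain beats a 16x software push
variant_in_camera = spread([shoot(sig, 16, post_noise=12) for _ in range(N)])
variant_pushed    = spread([shoot(sig, 1, post_noise=12) * 16 for _ in range(N)])

# ISO-invariant (very quiet after the amp): the two approaches come out the same
invariant_in_camera = spread([shoot(sig, 16, post_noise=0.5) for _ in range(N)])
invariant_pushed    = spread([shoot(sig, 1, post_noise=0.5) * 16 for _ in range(N)])
```

On the invariant sensor the two spreads match to within a couple percent, which is exactly the "just shoot low ISO and push in post" behavior people report.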

Comments ~

 Taken as a whole I feel I can begin to understand a few seemingly unrelated things.  Such as -

  • Early Kodak CCD MF sensors showed the best image quality at the time.  This, even if they _expanded_/_amplified_ the ADC output from 14bits to 16bits.  At least they started out at 14bits when everyone else was down around 10bits of RAW and/or stuck at 8bits jpg.  I can see where the "Fat Pixel" ideas could stem from in terms of perceived image quality.  What I've learned is that modern sensors can easily outperform the original Kodak MF sensors in every measurable way, _and_ we get ISO flexibility with current sensors where Kodak worked best at Base ISO, period.
  • I now see (on Photons to Photos - see reference link below) the differences between the old 14bit ADC Canon 5D MkII and 7D sensors and the _12bit_ ADC Sony A6000.  In one release of products we went from "ya, that's not bad but if we're not careful there's loads of noise in the shadows" to "wow! now there's a clean image".  The A6000 has more dynamic range at 12bits ADC and, as hoped for, less noise than the older 14bit ADC Canon sensors.  There is likely something about Canon's analog circuit that was rather prone to the introduction of error.  It's interesting to me that it's only with the release of Canon's new mirrorless cameras that their sensors seem to meet or slightly exceed the early Sony FF sensors.  This tells us something about how much work Canon has done on the analog side photo-site to ADC path.
  • Looking (again on Photons to Photos - see reference link below) at first generation Sony A7, A7S, and A7R, there's not really all that much difference between these models in terms of dynamic range and noise.  Yes, the A7S has slightly better dynamic range than the A7, but I'm not sure I'll ever actually see or appreciate the extra 1/3stop EV the A7S has.  I've read where we can begin to see differences of 1EV dynamic range, but I haven't verified that for myself.  Coming back to the Photons to Photos information, the A7R looks nearly as good as the A7S.  In the end, if I can't make a great "Fat Pixel" image with any of these three cameras, I'm doing something drastically and dramatically wrong.
  • ETTR (Expose To The Right) "works", but not for the reasons we're commonly told.  It "works" because we are able to bring the dark areas up out of the base level noise.  Of course we have to guard against saturating the highlights beyond recovery, but once the data has been collected, it's more flexible in processing than images where the dark areas are down in the noise base of the sensor.
  • ETTL (Expose To The Left) "works" as expected.  We use ETTL in black and white photography as a "lazy man's way" of guaranteeing as much detail in the highlights as possible.  This can closely emulate film images when processed appropriately.  The highlights are raised in processing and the shadows rise with the highlights. As an aside, I've started to use Zebras to know when the highlights are saturated.  Coming back to ETTL, with quiet sensors we might be able to avoid adding distracting noise to the shadow areas.  Which leads me directly to the next item.
  • One of the problems I had was understanding why there is so much noise in the shadow areas of severely under-exposed images, even at 100ISO, where I thought we should see the lowest noise everywhere across a broad dynamic range image.  It turns out the noise is easily explained.  The electron gathering wells aren't able to gather enough information consistently across the wells to make the dark areas appear as smooth as the light areas.  The light simply is not available at those low levels to distribute evenly.  Hence noise, even at 100ISO.  To avoid this problem in a single shot, use ETTR or better yet Zebras.  If the dynamic range of the scene is broader than a sensor can handle, there's HDR and image stacking during processing.
  • Potential recent "Fat Pixel" candidates could be the Sony sensor manufactured Fuji MF GFX cameras.  Their dynamic range exceeds by 1EV the best Sony FF.  The MF Fujis are well above my current Pay Grade and I doubt we'll ever see them go for less than 1500USD used.  Yes, I know.  The 50R is trading hands on the used market for around 2250USD.  It's still too rich for me.  But there is something to be said about well-engineered sensor development and ISO invariant circuitry.  Kodak sensors were never ever close to being this good.
  • I now understand why backside illumination of a sensor "works."  By raising the black base to a known level, sensor noise levels are suppressed by starting at zero (pure black) well above the potentially noise inducing analog circuits.  It's a neat trick, actually.  I feel there are some creative solutions being applied to sensor design these days.  This is Fun Stuff.
  • Pursuing the absolute best color depth, longest dynamic range, and lowest noise images possible requires a static subject, a tripod, setting the camera at its Base ISO 1, and shooting three or four identical images.  Three image stacked low ISO photos "work" because the process averages out the subtle fat/shallow electron well variations.  The final output image quality should easily exceed that of, well, just about anything.  And speaking of which...
  • Sony has come up  with something interesting in their dual Base ISO sensors.  While destined for video work, I can see where there will be benefits for us stills shooters, too.  Even if we're stuck at 8EV dynamic range at Base ISO 2, I'm still struck by the possibilities of lower than Base ISO 1 noise.  
It's a Crazy Topsy Turvy Counter-Intuitive World out there and I can't wait to see what the Sensor Development Wizards come up with next.
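The stacking claim in the list above is easy to verify numerically: averaging N aligned frames cuts random noise by about the square root of N.  A pure-Python sketch (invented numbers, with Gaussian noise standing in for the real mix of shot and read noise) -

```python
import random

random.seed(42)

def noisy_frame(true_value, sigma, n_pix=5_000):
    """One 'frame': n_pix readings of the same flat patch, each with random noise."""
    return [random.gauss(true_value, sigma) for _ in range(n_pix)]

def stack(frames):
    """Average aligned frames pixel by pixel."""
    return [sum(px) / len(px) for px in zip(*frames)]

def rms_error(frame, true_value):
    """How far, on average, the frame strays from the true value."""
    return (sum((p - true_value) ** 2 for p in frame) / len(frame)) ** 0.5

single  = rms_error(noisy_frame(100, 8), 100)
stacked = rms_error(stack([noisy_frame(100, 8) for _ in range(4)]), 100)
# stacked comes out close to half of single: sqrt(4) = 2x noise reduction
```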

Until further notice I have stopped reading the thoughts, evaluations, and conclusions of the vast majority of popular still-photography related websites.  In general they show a distinct lack of meaningful data on sensor performance, dynamic range, color depth, and noise characteristics.  I know they're "trying" the best they can, but I've found that I have a difficult (Gear Grinding) time with the Marginal at Best "test" methods and bad data they use to justify their conclusions.  It's become just "noise" to me.
Sometimes a person has to dig deeper to get at truth.

References ~ [updated/modified on 4 March (correcting a link to the most important video, the first), and updated slightly on 3 March, 2023]

Here is a video with very clear, concise explanations of Sony sensor behaviors.  

I had to pause the video every other sentence, or so it seemed, so I could stop and think about and think through what was just said.  There is so much _good_ information here.  In fact, _this_ should be the reference for discussion of sensor performance.  I wish more of YouTube was this accurate and informative.  I've found there is too much "squishy thinking" go'n on out there!!!  So this is an absolute breath of fresh air.  

It's easy to see how much this video influenced the organization of my thoughts.  While I knew various pieces and parts of the process, the video helped me organize what I knew into a cohesive whole.  In fact, I've borrowed from the video outline in the writing of this piece because I found it that useful.

Photons to Photos has a great site on measured dynamic range, noise, and sensor performance, complete with methods and rationale explaining what they do and why. For some reason I'd not run into this site until now.  I spent many hours looking around, reading, and comparing various sensor data.  It's amazing how much a person can learn if they're patient and aware and take a few notes along the way.

Here is a video on ISO invariance and why it's interesting and useful.  There are also comparison images showing how ISO variant and invariant sensors behave differently. 

I often wondered what changed with the introduction of Sony's cameras around 2012.  Reading Jon Rista's explanation from 19 January 2015 - 01:30 PM gives a clue.  While I might not agree that 500nm wafer technology was a problem (except as an example of Canon not keeping up with the latest wafer fabrication trends), the rest of his argument rings true.  Further, Jon Rista gives us a few potential clues about "Fat Pixel" photo-sites in his post of 29 January 2015 - 12:09 AM (pay attention to his comments on photo-site area and electron well capacities). For a Geek who really wants to Geek Out it's very interesting and potentially practical stuff.

Lastly, here is a link to a Wiki page where we can see a Sony camera and function matrix.  In addition, Sony has a matrix of ADC bit depth information.  This too is organized by camera and function.  Using this information we can begin to guess how much circuitry is implemented in the various cameras that influence things like read speed and camera capabilities.

To me the most important outcome of all the Geeky Nerdiness is the acquisition of knowledge that can be applied and balanced in the real world when pursuing highest possible image quality.

Update of a common 'net expression from when dinosaurs roamed the earth: Base ISO 1, f/8 and be there.  

If it's dark out, Base ISO 2, wide open and be there.

Tuesday, February 28, 2023

Reconsidering the Sony A7S... [part Three]

My Gears have been Ground.  I have vented my spleen.  And I feel I finally have my arms around the nature of the problem.

The problem is this: Many of the camera review sites post numbers grading various aspects of sensor performance that are for various reasons problematic.

I don't yet have a solution to the problem.  All I know is that converting images to TIFF, using in-camera JPG processing, or image downsizing to "normalize" (whatever that means in the context of dynamic range and color depth) all have their problems from an image quality measurement and comparison point of view.

Further, I don't see a way to evaluate if Sony's claim of 15EV+ dynamic range for their A7S is "real" or not. I'm not sure if this small 12mpixel Full Frame device is part of the "Fat Pixel" family of Mythic Pixie Dust cameras.  If it is, how might we _see_ or _measure_ the Magic?

Not knowing entirely how to proceed I will set all this aside and go have a long think.

In the meantime, the friend who shared how some standalone Noise Reduction software can work its magic on noisy images suggested something to me.  If I had a problem with downsizing A7R images to A7S image size (where the downsized image is _always_ better with large sensored cameras in DxOMark and DPReviews reviews), why not upsize the smaller image to the A7R size?

Hence this blog entry.

Over on YouTube there is a video by a woman who printed a couple A7S images to 47 inches long.  That's rather big, right?  Is it any "good?"  Hmmm...

There's a guy who ran a comparison study with two photographer friends where they tried to see any differences between A7S and A7 (24mpixel) prints.  It seems that differences between the A7 and A7S are best seen when comparing identical images.  When looking at standalone prints it seems much more difficult to tell which print was made by which camera.

Then there is the ability to upsize images in a somewhat meaningful way.  I've been looking at this a little and have to say, the results can be impressive.

There's lots of food for thought, here.


  • Sony A7S image opened in the Gimp
    • Upsize from 4240pixels to 7300pixels
    • Apply 1pixel USM in upper layer with 70% opacity
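For the curious, the USM-at-70%-opacity step amounts to blending a sharpened layer over the soft original.  A one-dimensional pure-Python sketch of the idea (my own toy version, not the Gimp's actual implementation) -

```python
def unsharp_mask(row, amount=1.0):
    """1-pixel-radius unsharp mask: original + amount * (original - local average)."""
    n = len(row)
    blur = [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3
            for i in range(n)]
    return [p + amount * (p - b) for p, b in zip(row, blur)]

def blend(base, top, opacity):
    """Layer blend at partial opacity, clamped to the 0-255 pixel range."""
    return [min(255.0, max(0.0, b * (1 - opacity) + t * opacity))
            for b, t in zip(base, top)]

edge = [10, 10, 10, 200, 200, 200]            # a soft-ish edge, e.g. after upsizing
sharpened = blend(edge, unsharp_mask(edge), 0.70)
```

The dark side of the edge gets pushed darker and the bright side brighter, which is the acutance boost that makes an upsized image read as "crisp"; the 30% of the soft original kept by the opacity setting tames the halos.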

Comparison ~


DPReview image

DPReviews base image 

DPReview image ~ reworked 

Processed image

Sony A7S vs A7R IQ Comparison 3

Comments ~

Let's start by looking at the 2nd and 5th rows of images.  The 2nd row shows the native file size Sony A7R at 100 percent.  Neither noise reduction nor Capture Sharpen was used.  When I started this WeeLookSee I thought I might see a clear difference in resolution since the A7R has no AA filter and the A7S does.  So, with this as sharp as the A7R gets without any further processing, have a close look at row 5.

Row 5 is the A7S image, Rawtherapee Capture Sharpened and noise reduced, _then_ upsized to 7300pixels long using the Gimp NoHalo algorithm.  Pretty amazing, isn't it?  Stare at it awhile.  Still amazing, right?  The most obvious difference to me is in trying to read the "One Way" sign in the middle of the center column of images.  Amazing.

Now let's have a look at rows 4 and 6.  Row 4 is the native file size A7R image with Rawtherapee Capture Sharpen and noise reduction.  We can just begin to read the "One Way" sign.  Noise is reduced.  The overall image is looking not half bad.  Row 6 is the upsized A7S image with a 1 pixel UnSharp Mask applied.  While it looks pretty good, it should be obvious that there is more detail in the A7R image.

Lastly, looking at just the A7S upsized images in rows 5 and 6 and not trying to compare them against any of the A7R images, what do we see?  They actually look pretty good, don't they?

While I knew it already, this exercise reinforces to me that software can play an important role in image processing.  When done with care, upsizing the 12mpixel A7S to A7R dimensions can yield interesting results.  

As I wrap up this blog entry I have to confess that I've had that long hard think.  I've thunk a bit.  I've cogitated some.  I've studied a lot.  I've learned a bunch.  I've _finally_ found a Vein of Knowledge that is proving rather useful.  

Yes, Martha, there is One More Wabbit Hole to fall down.

Sunday, February 26, 2023

Reconsidering the Sony A7S... [part Two]

I guess my Gears are easily Ground.

It started many years ago when I rode motorcycles.  My Gears were Ground by various reviewers who either said things that were silly, all too often wrong, or failed to mention things that would be important to a rider.

The first example is of a review written about a Kawasaki touring bike.  The reviewer noted all the usual things, except one.  The tourer had a clearly _over_ spec'd alternator.  Why would that be? Well, Kawasaki knew that buyers of their tourer would want to add accessory lights and things that would use the extra juice.  But reading the review, no one ever knew what the standard issue bike was capable of.

The second example involves the early Ducati 900SS.  They are narrow, light, and made sufficient horsepower to throw you down the road at 135mph in stock trim.  More importantly was the fact the bike was rock solid stable at all speeds. It was like riding a laser beam.  I kid you not.  The stability instilled a certain confidence.

Compared with this, Japanese motorcycles from the 1980's tended to wander, have slightly vague handling, and might induce a "tank slapper" under the wrong conditions.  I know these things first hand because of the RD400, 550cc Vision, 650 Seca, GS500, and three road-worthy Ducatis (bevel and belt-drive) I used to own.  Fortune was really on my side at the time as I got to ride one each of every model bike made or imported to the US during the early to mid-1980's. I accidentally bounced the valves of a Kawasaki 750cc Turbo prototype.  14,000RPM was a bit beyond spec, but the bike survived with zero problems.  Yet it was the Ducati that instilled confidence.

This is what I look for in reading reviews. Confidence.  I want confidence that people know what they're talking about.  I want confidence that their findings are worth considering.

A photography example of what I mean comes from reading just about anything written by Geoffrey Crawley.  His reviews were in-depth, as concise as possible, and informative.  If there was a detail that he felt was important to share, he would expand on the subject until everything became clear, such as when he wrote about the 1/1000th of a second top speed of the original Nikon F.  I have confidence that he knew what he wrote about.

Similarly, I enjoy reading Roger Cicala at Lens Rentals. He posts not just his findings, but _how_ he got to those findings in the first place.  He publishes his methods and _reasoning_ behind those methods.  It's a real joy to read, learn, and understand.  M. Cicala instills confidence.

What fails to instill confidence is when well-established reviewers make decisions, "test" something, assign numbers, and post the "results" without sharing at the same time clear methods and limitations.  Specifically, converting images to TIFF, using in-camera JPG processing, or image downsizing to "normalize" (whatever that means in this context) all have their problems from an image quality measurement and comparison point of view.  But I never knew about the limitations until I dug into the subject.  Information wasn't easy for me to find.

Why is any of this even remotely important to me?  I prefer accuracy and full truth so that I can make the best informed purchase and use decisions that I can.  I'd like to be able to consider the trade-offs as they really are.

In the case of the Sony A7S, I passed on two inexpensive, good condition examples thinking that they were of lesser stills image quality than the A7 or A7R.  I'm not sure what the real answer is, but I'm learning it's not exactly how I read about it on various "reviewer" sites.  Of course it's too early for me to know if it's worth plunking down good hard earned money for one. 

Building on the previous blog entry, I was interested in seeing what happened to Sony A7S and A7R DPReview-supplied sample ARW (RAW) images when I applied Rawtherapee Capture Sharpen and Luminance Noise Reduction.  Capture Sharpen should be obvious since the A7S reportedly comes with an AA filter.  Noise Reduction would be applied to see if the heavily amplified dark areas of the scene could be quieted down.  The A7R dark areas in particular look pretty ghastly to me at the native sensor resolution image size compared to the A7S.  Perhaps noise reduction could help the A7R image?


     NOTE: the DPReview images were 1.7 and 2EV underexposed and filed under the heading of "Dynamic Range in the real world."  They were trying to share something they "saw" regarding noise control and dynamic range.  So I needed to do what they did, raise the shadows to the point the overall image looked somewhat OK, then consider the noise and dynamic range, particularly in the dark amplified areas.  Where DPReview used Lightroom's Exposure Value slider, I used Rawtherapee's Lightness so as to avoid blowing out the highlights.
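The difference between the two sliders can be sketched numerically.  I don't know Lightroom's or Rawtherapee's actual internal math, but a simple gamma-style curve captures the behavior I was after: shadows come up hard while highlights stay put -

```python
def exposure_push(v, stops):
    """Exposure-slider style push: multiply everything, clipping at white (1.0)."""
    return min(1.0, v * 2 ** stops)

def lightness_lift(v, gamma=0.55):
    """Gamma-style lift: shadows rise a lot, highlights barely move, nothing clips."""
    return v ** gamma

shadow, highlight = 0.02, 0.95    # linear pixel values, 1.0 = pure white
```

A 2-stop exposure push drives the 0.95 highlight straight into clipping, while the gamma lift brings the 0.02 shadow up several times over and leaves the highlight essentially untouched.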

Comparison ~

DPReview image

DPReviews base image 

DPReview image ~ reworked

Processed image

Sony A7S vs A7R IQ Comparison 2

Comments ~

Considering how dark the shadows in the original un-modified images are (they were shot 1.7EV and 2EV under-exposed), the processed results are rather amazing.  I could never ever dig this deep into the shadows with my old Canon DSLR systems.  Sony has done a great job.

Looking again at the original native size images, the A7R shows more noise than the A7S in the shadows.  To me the difference is obvious.  I'll say it again, the A7S native size image heavily amplified shadow area image shows _less_ noise than the native size A7R image.

Then I did two things.  I Capture Sharpened each native size image and Noise Reduced them. 

The Capture Sharpen step makes the A7S image very crisp and sharp-looking.  The A7R gets that little bit of extra sharpness, too.  I've come to like using this function early in my image processing.  Images don't look overly sharp to my eyes.

As I've said elsewhere, what surprises me is that the A7R no-AA filter image isn't sharper looking than the AA filtered A7S.  I'm not sure how to evaluate this.  Though perhaps it should be noted that the "One Way" sign is nearly readable in the A7R Capture Sharpened/Noise Reduced image.

Looking at the effect of Noise Reduction on the A7S and A7R image shadow areas gives just about what we might expect.  The A7S goes from a small noise to even smaller noise patterns.  The A7R goes from moderate sized noise to smaller though still obvious noise patterns.

A friend sent me samples of what can happen when using a specialized noise reduction software.  The results are impressive.  In fact, I can easily imagine that a properly-processed very under-exposed image can be massaged into something pretty darned nice.  My suggestion would be if the highlight areas aren't showing noise, then mask the shadow areas and apply noise reduction there.  At which point we've crossed over from considering the sensor to taking advantage of advances in image processing software.

Coming back to sensors, and working with several Sony A7S ARW (RAW) sample images downloaded off the 'net, I feel I'm beginning to understand what Michael Reichmann was saying about medium format sensors and the A7S.  I'm not sure it adds up to much.  I don't hear people raving about the A7S image quality over, say, the A7 or sensors from other camera manufacturers.  Perhaps it only matters when the winters are dark, cold, snowy, and I have way way too much time on my hands, but I think there _might_ be something there.

Chasing Pixies has become my day job.  

Oh, but I have one more step to take in this Wacky Adventure.  Stay tuned for part Three.

Friday, February 24, 2023

Reconsidering the Sony A7S... [part One]

I'm still thinking about the Sony A7S.  I'm not sure why, but I am.  Er, well, yes, I do know why I'm still thinking about it.  Something is Grinding my Gears.

Michael Reichmann wrote some years ago about how the A7S files "felt" similar in quality to the Kodak CCD medium format sensor output.  I couldn't help but notice he didn't say the same thing about the A7 nor the A7R sensors.  Both cameras had been on the market a year before the A7S was introduced.  

The 24mpixel A7 camera even now produces beautiful images for me.  What could be better?  Well... maybe there was a year's worth of developmental "baking" of extra quality into the A7S over the earlier sensors?  Or, as was the thought at the time, was the 8,4micron photosite size the Source of Fat Pixel Goodness?

I read the DPReview review of the A7S, found and downloaded two ARW (RAW) samples.  I thought it might be interesting to see how they behaved when subjected to my standard image processing using Rawtherapee and the Gimp.  It feels like there might be an opportunity to learn something using someone else's comparison images.


    NOTE: the DPReview images were 1.7 and 2EV underexposed and filed under the heading of "Dynamic Range in the real world."  They were trying to share something they "saw" regarding noise control and dynamic range.  So I needed to do what they did: raise the shadows to the point the overall image looked somewhat OK, then consider the noise and dynamic range, particularly in the dark amplified areas.  Where DPReview used Lightroom's Exposure Value slider, I used Rawtherapee's Lightness so as to avoid blowing out the highlights.
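To make that last distinction concrete, here is a tiny sketch (my own illustration with made-up linear values, not Rawtherapee's or Lightroom's actual math) of why a multiply-style exposure push clips highlights while a curve-style lightness lift does not:

```python
import numpy as np

def exposure_lift(img, ev):
    """Multiply linear values by 2^ev, clipping at 1.0 -- highlights blow out."""
    return np.clip(img * (2.0 ** ev), 0.0, 1.0)

def lightness_lift(img, gamma=0.5):
    """A simple gamma curve: raises shadows strongly, maps 1.0 to 1.0."""
    return np.clip(img, 0.0, 1.0) ** gamma

scene = np.array([0.01, 0.10, 0.50, 0.98])  # shadows ... near-white

print(exposure_lift(scene, 2.0))   # the 0.98 highlight clips to 1.0
print(lightness_lift(scene))       # highlights stay below 1.0
```

A real Lightness control is more sophisticated than a bare gamma, but the shape of the trade-off is the same: the curve compresses the top end instead of pushing it past clipping.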

Comparison ~

DPReview image

DPReview's base image


DPReview image ~ reworked

DPReview's base image
with my processing applied


Sony A7S vs A7R IQ Comparison 1

Comments ~

The rather heavy Rawtherapee processing was needed to bring up the shadows of the two under-exposed images borrowed from DPReview.  These images are trying to stress the image capture system by amplifying (expanding) the dark areas so that we can evaluate things like dynamic range and sensor noise (particularly in the expanded/raised shadow areas).  Most of the time many of us would never shoot a photograph in the real world in this way, and as we will see in future posts, I have questions about the validity of this as a test method.

The 100ISO base Sony A7S image at its native 4240x2832 pixel resolution looks smooth in the light areas, and slightly noisy in the deep and now-raised shadow areas.  This is pretty remarkable to me, particularly when I compare the base uncorrected image to the lightness and curves processed result.

To my eyes the A7S has lower noise than the 7360x4912 pixel A7R image.

We were led to expect this, right?  Seems intellectually correct.  Lower pixel density sensors have lower noise than sensors where the photosites are packed in like sardines, right?

So what's my Gear Grinding problem?  Well, here it is.  DPReview wrote "... Who wins? In a nutshell? a7R, hands down..."

Please tell me my eyes are really bad, or tell me that we see the same thing.  I don't see where the A7R image "wins" in any dimension except size (har!).

The Gear Grinding problem is partially explained in the next sentence.  "...we've downscaled [emphasis mine] the a7R image to the a7S' 12MP resolution, the a7R offers more detail and cleaner shadow/midtone imagery compared to the a7S. Downscaling the a7R image also appears to have the added benefit of making any noise present look more fine grained; the a7S' noise looks coarse in comparison..."

This is where they have me really and honestly Gear Grindingly stumped.  If you're trying to compare image qualities between different sized sensors, why on Gawds Green Terra Firma would someone want to downsize a larger image to the smaller sensor dimensions?  

I'm serious.  Why?  What does it show?  What does it prove?  How would it be relevant to photography except when someone is willing to throw away potential resolution to, what?, prove an Obvious Point?

The Obvious Point being that something called "pixel binning" or downsizing works.  Noise across an image will be averaged out (or reduced).  With that will come more accurate colors (averaging out the chromatic variations brought, in part, by subtle noise, even at low ISOs).
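That averaging claim is easy to check numerically.  A quick sketch with a synthetic noisy "flat frame" (the mean and noise figures are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
# A flat gray frame: mean 0.5, noise sigma 0.05.
flat = 0.5 + 0.05 * rng.standard_normal((1000, 1000))

# Average each non-overlapping 2x2 block into one "fat" pixel.
binned = flat.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(flat.std())    # ~0.05
print(binned.std())  # ~0.025, i.e. sigma / sqrt(4)
```

Averaging n pixels divides random noise by the square root of n, so 2x2 binning roughly halves the noise standard deviation.  That is exactly the "free" improvement downsizing hands to the bigger sensor in these comparisons.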

What if, instead, reviewers were to take a 4240 pixel long section out of the A7R and compare directly like size image to like size image?  Adjust for image field by correctly selecting the focal length, of course.  Wouldn't that make for a more honest evaluation, if what you're trying to evaluate were things like dynamic range, color depth, resolution, and noise?
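A like-for-like crop comparison of that kind might look like this (a sketch; the dimensions are the two cameras' published native sizes, and the zero array stands in for a real decoded RAW frame):

```python
import numpy as np

def center_crop(img, width, height):
    """Cut a same-pixel-count window from the larger file instead of downscaling it."""
    h, w = img.shape[:2]
    top, left = (h - height) // 2, (w - width) // 2
    return img[top:top + height, left:left + width]

a7r_frame = np.zeros((4912, 7360))          # A7R native dimensions (stand-in data)
crop = center_crop(a7r_frame, 4240, 2832)   # A7S native dimensions

print(crop.shape)  # (2832, 4240)
```

No pixels are averaged, so whatever noise, color, and resolution character the A7R sensor has survives into the comparison untouched.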

And speaking of resolution, the A7R reportedly comes without an AA filter and is supposed to be sharper than those that come with AA filters (such as the A7 or A7S).  And yet, the A7S appears to deliver resolution as well as the A7R.  Here is Yet Another Wabbit Hole to fall down, but we'll save this for another time.

In the end, the DPReview-provided ARW (RAW) sample A7S image looks to me to be smoother and sharper than the A7R _before_ I apply any noise reduction or capture sharpening.  I'll make those adjustments in part Two of this series so we can begin to see what the differences might be as we start processing an image.

For the moment, however, can someone explain to me like I'm in kindergarten how downsizing an A7R image to A7S dimensions gives us a valid departure point for understanding relative image quality?

Sunday, February 19, 2023

Incomplete understanding can lead to wrong conclusions...

I stepped in "it" recently and I'm here to confess the errors in my understanding.

Retromobile ~ 2023

In photography I've run across, for years and years, loads of marketing promises, distortions, half-truths, data selected to fit preconceived curves, bendings of reality, and outright lies.

It's in part why I've written here and elsewhere about what I've found.  Methods, processes, setups are explained so that anyone can see for themselves what's what.  You can't (and shouldn't) blindly trust me.  Trust if you must, but verify.

Recently falling down a Wabbit Hole labeled "Fat Pixels", I came to a set of conclusions that, in retrospect, were malformed.  I used DxOMark's color depth and dynamic range numbers to evaluate the image quality of Sony A7, A7R, and A7S cameras.  Based on this I made decisions on gear purchases and spoke ill of the "Fat Pixel" discussion, particularly as it related to the Sony A7S and its "Fat Pixel" 8,4micron photosite size.

Retromobile ~ 2023

What I was trying to evaluate was whether there might be some magic "Pixie Dust" in the small 12mpixel system that I might find good reason to enjoy in my own image making.  Of course I did nothing to evaluate Sony's own 15+EV dynamic range claims.  I assumed (bad, bad, bad, I know) that Sony was simply wrong and had gone overboard in their marketing claims.

Well, the actual error I made came from my lack of understanding of how DxOMark's numbers are calculated.  I relied on the numbers alone to tell the whole story.  In my defense, when I read through DxOMark's comments I didn't see anything that led me to believe they were considering anything but the entire sensor output.

However, reading criticisms of DxOMark I've come to understand that sensor output is _downsized_ from whatever the native sensor resolution is down to 8mpixel before the evaluation of dynamic range and color depth is made.  Right here is the source of the error.

Retromobile ~ 2023

I should've guessed at this and wondered at the time how the numbers were coming out the way they were.  Looking at "test" results for various Sony Full Frame cameras we can see that the higher megapixel count sensors ALWAYS score better than the lower megapixel count sensors.  Always.  This is true for both color depth and dynamic range.  Spend some time on their site to verify my claim.

How can this be?

In the process of downsizing an image for evaluation, noise is averaged out, dynamic range increases (by lowering the noise floor), and colors become purer.  I know this from my three image stack/average comparisons.  Noise averaging works similarly when downsizing.  We're not seeing anything that comes straight off the sensor when we read DxOMark's numbers.  The image has passed through some kind of conversion process.
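To put a rough number on it, here is my own back-of-envelope estimate (an assumption about how the normalization behaves, not DxOMark's published formula): averaging n pixels into one divides random noise by the square root of n, which raises measured dynamic range by 0.5 x log2(n) stops.

```python
import math

def dr_gain_ev(native_mp, normalized_mp=8.0):
    """Estimated EV of dynamic range gained by downsizing native_mp to normalized_mp.

    Assumes purely random (uncorrelated) noise, so averaging n pixels
    cuts noise by sqrt(n) and lifts DR by 0.5 * log2(n) stops.
    """
    return 0.5 * math.log2(native_mp / normalized_mp)

print(round(dr_gain_ev(36.4), 2))  # A7R, 36.4mpixel: ~1.09 EV of "free" DR
print(round(dr_gain_ev(12.2), 2))  # A7S, 12.2mpixel: ~0.30 EV
```

If an estimate like this is anywhere near right, the normalization hands the higher-resolution sensor the better part of a stop of extra measured dynamic range before any real sensor comparison begins.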

There are other sites that compare and test sensors in camera systems, of course.  But these convert the original RAW files into TIFF.  So we're not seeing anything straight off the sensor there, either.  While this might be a different process than the one DxOMark uses, there is still a conversion of the image, and I'm not convinced this won't affect the output.  I haven't easily come across any method evaluation information that would guide readers one way or another.

Retromobile ~ 2023

How, then, do we get at an understanding of color depth and dynamic range?  How do we get beyond the various comparison sites' use of image conversions, which can alter evaluations and results?  If someone knows, I'm all ears.  Please.  I'd like to learn.

As is often the case when I look into commercially available photography tools, I turn to the Real World to see what's going on.  In this way, I found a site that allows people to download Sony A7S ARW files.  I grabbed a couple of images and put them through my normal Rawtherapee processes.  This is something I should've done earlier, but I'm happy I finally got around to doing it now.

I can't quantify what I saw, of course.  There is nothing being measured.  All I can say is that A7S ARW files look like great starting points for still image processing.  They look very subtly different from my A7 24mpixel files.  The A7S colors are deep and rich.  The resolution is outstanding after being passed through Capture Sharpen.  The low ISO noise levels are very low, indeed.

Perhaps I should pick one up after all and try it?  I still have a few Nikkor lenses to sell.  Then we'll have a look.  A7S prices are like A7 prices around these parts.  Cheap, and getting cheaper by the day.

Unexpected Renaissance ~ Retromobile ~ 2023