Before we look at the results, let's compare the two approaches.
Image Stacking for Super-Resolution -
- Shooting handheld
- Fire off a half-dozen images of the scene using a high-speed multiple exposure mode
- In processing -
- Align the images
- Stack the images as layers in PS or Gimp
- Up-rez every image 2x using bi-cubic interpolation at 600 DPI (minimum)
- Set the opacity of each layer so the stack averages evenly
- Flatten the image
- Apply an Unsharp Mask of 2 pixels
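The processing steps above can be sketched in code. This is a minimal NumPy sketch, assuming the frames are already aligned; it stands in simple pixel replication for the bi-cubic up-rez and a small separable blur for the Gaussian behind the Unsharp Mask - real editors like PS or Gimp do both better:

```python
import numpy as np

def upscale_2x(img):
    """2x up-rez by pixel replication (a stand-in for true bi-cubic)."""
    return np.kron(img, np.ones((2, 2)))

def blur(img, radius=2):
    """Cheap separable blur approximating a small Gaussian."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

def unsharp_mask(img, radius=2, amount=1.0):
    """Sharpen by adding back the difference from a blurred copy."""
    return img + amount * (img - blur(img, radius))

def stack_and_uprez(frames):
    """Up-rez each aligned frame, average the stack ("flatten"), then sharpen."""
    upscaled = [upscale_2x(f) for f in frames]
    flattened = np.mean(upscaled, axis=0)  # equal-weight "opacity" average
    return unsharp_mask(flattened)
```

Feed it a half-dozen aligned, noisy frames of the same scene and the averaged result comes out visibly cleaner than any single frame.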
Pros -
- Image noise is reduced significantly
- Light/Dark transition zones are smooth and "creamy"
- 2x up-rez gives a somewhat useful, if not exactly brilliant, increase in viewable detail
- If all else fails, at least there is an image to begin with, up-rez'd or not.
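The noise-reduction claim follows from basic statistics - averaging N independent frames cuts random noise by roughly the square root of N. A quick illustration with synthetic noise:

```python
import numpy as np

rng = np.random.default_rng(42)
n_frames, shape = 6, (256, 256)

# Pure-noise frames standing in for the random component of each exposure.
frames = rng.normal(0.0, 1.0, (n_frames, *shape))
single = frames[0].std()
stacked = frames.mean(axis=0).std()

# Averaging 6 frames cuts noise std by roughly sqrt(6).
print(single / stacked)  # ~ sqrt(6) ≈ 2.45
```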
Cons -
- The flip side of the third entry under "Pros" - even limited to 2x up-rez, the increase in viewable detail isn't exactly brilliant
- Limited to photographing static subjects
- Limited to 2x up-rez (or thereabouts)
- How do Sony, Pentax, and Olympus make their camera sensor "wiggle" up-rez function work so well? If you know the answer, I would be happy to learn a bit more about how they achieve such beautiful results in-camera. In principle the two approaches (IBIS sensor motion and handheld image stacking) should be fairly equivalent, but clearly they're not. I'd like to see if there might be a way to refine the handheld multiple exposure approach.
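One likely answer: the in-camera modes move the sensor by known, exact sub-pixel offsets, so the frames interleave onto a true 2x grid instead of merely averaging. A toy sketch of that principle (the scene, offsets, and merge here are simulated - this is not any camera's actual algorithm):

```python
import numpy as np

def capture(scene_hr, dy, dx):
    """Sample a low-res frame from a 2x scene at a (dy, dx) half-pixel offset."""
    return scene_hr[dy::2, dx::2]

def pixel_shift_merge(frames_with_offsets, hr_shape):
    """Interleave frames taken at known half-pixel offsets onto a 2x grid -
    the principle behind in-camera sensor-shift modes."""
    hr = np.zeros(hr_shape)
    for (dy, dx), frame in frames_with_offsets:
        hr[dy::2, dx::2] = frame
    return hr
```

With four frames at the four half-pixel offsets, the merge recovers the full 2x scene exactly. Handheld offsets are random and unknown, so after alignment the frames mostly land on the same grid and can only be averaged - which may be why the two approaches aren't equivalent in practice.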
Image Stitching for Super-Resolution -
- Set exposure to something that accurately expresses highlight and shadow detail - use this combination of aperture, shutter speed, and ISO for taking all the "sections"
- Shoot small-ish "sections" of the scene that sequentially overlap each other, making sure you've covered the entire scene (see: Brenizer Method)
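A hypothetical little helper for planning the "sections" - given the angular span of the scene, the horizontal field of view of one frame, and an overlap fraction, it estimates how many overlapping frames cover one row (the 30 percent overlap default is my assumption, not a rule from this workflow):

```python
import math

def sections_needed(scene_deg, frame_deg, overlap=0.3):
    """How many overlapping frames cover a given angular span.
    Each new frame advances by frame_deg * (1 - overlap) degrees."""
    if frame_deg >= scene_deg:
        return 1
    step = frame_deg * (1 - overlap)
    return 1 + math.ceil((scene_deg - frame_deg) / step)
```

For example, covering a 90-degree span with an 85mm full-frame lens (about a 24-degree horizontal field of view) at 30 percent overlap takes five sections per row; repeat per row to cover the vertical span.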
- In processing -
Pros -
- Enormous sensor-level resolution images can be created (example - the original in this case is 20,000 pixels in the long dimension)
- Depending on the software, various projection methods give optional final image "looks" - i.e., you can go so far as to emulate fisheye distortion if you wanted to (example - not fisheye, but look at the ceiling "pull")
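The projection "looks" differ because each projection maps an off-axis angle to a different image radius. A small sketch comparing a rectilinear mapping (r = f tan θ), which stretches the edges of a wide view - the ceiling "pull" - against an equidistant fisheye mapping (r = f θ), which compresses them but bends straight lines:

```python
import math

def rectilinear_r(theta, f=1.0):
    """Image radius of a ray theta radians off-axis, rectilinear projection."""
    return f * math.tan(theta)

def fisheye_r(theta, f=1.0):
    """Same ray under an equidistant fisheye projection."""
    return f * theta

# The two projections diverge sharply toward the edge of a wide view.
for deg in (10, 40, 70):
    t = math.radians(deg)
    print(deg, round(rectilinear_r(t), 2), round(fisheye_r(t), 2))
```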
Cons -
- Images need to be planned ahead, with the final image visible only after processing - which means it really helps to "pre-visualize" a scene
- Depending on the software and accuracy of shooting image "sections" there may be distortions (example of failing to accurately rotate the lens around the nodal point)
- Limited to static subjects
- Slow setup time - suggest manual exposure metering to help the "sections" stitch correctly and to keep the overall final scene exposure even and correct
Comparison of Resultant Images -
[If you click on the image it'll take you to the Flickr hosting site. Once there, look at the file at full resolution. In many cases the differences between images are small and likely can't be seen until you take a squint at the comparison at 100 percent.]
The obvious conclusion is this - even though the stitched image is 1400 pixels shorter in the long dimension than the stacked up-rez'd image, the stitched image clearly resolves small details better than the stacked image.
My friend is, of course, correct. Check out the section titled "4-Way Focusing Rail..." The image stitch approach can be very nice indeed, but only if you plan ahead.