Super resolution, where you take a small image, increase its spatial dimensions, and try to retain as much detail as possible, has been an interesting topic for photographers over the years.
Google recently published research describing two different approaches to the problem, both based on diffusion models. In each case, a machine learning model is trained on image data and then applied to upscale new images.
"...diffusion models, originally proposed in 2015, have seen a recent revival in interest due to their training stability and their promising sample quality results on image and audio generation. Thus, they offer potentially favorable trade-offs compared to other types of deep generative models. Diffusion models work by corrupting the training data by progressively adding Gaussian noise, slowly wiping out details in the data until it becomes pure noise, and then training a neural network to reverse this corruption process. Running this reversed corruption process synthesizes data from pure noise by gradually denoising it until a clean sample is produced. This synthesis procedure can be interpreted as an optimization algorithm that follows the gradient of the data density to produce likely samples..."
I wonder how well it might work for amateur and professional photographers. Perhaps we'll get a chance to see someday.