It’s an argument that predates photography itself: what constitutes “reality” in an image is a perennial debate topic on the internet. On Friday, Reddit user u/ibreakphotos posted a couple of Moon pictures that rekindled it.

The photographs in question contrast a blurry Moon with one that is sharper and clearer. The latter is a better representation, but it isn’t exactly real — at least not in the sense that most of us consider a photograph to be genuine.

The image was captured with a Samsung smartphone, then enhanced by the phone’s own processing, which added detail that wasn’t in the scene. Although it may seem like a stretch to call the result a photo given all that smartphone cameras can currently achieve, it’s actually more of a small step than a great leap.

Samsung is no stranger to machine learning; with its aptly named Space Zoom, it has spent the last few years pushing high-magnification zoom boosted by AI. Space Zoom typically combines data from an optical telephoto lens with many frames shot in quick succession, using machine learning to produce images of faraway objects far crisper than a smartphone camera would normally manage. It works well.
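The core of that multi-frame idea is easy to illustrate. A minimal sketch (not Samsung’s actual pipeline, which is proprietary): averaging a burst of noisy frames of the same scene suppresses sensor noise roughly in proportion to the square root of the frame count.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical "true" scene: a simple 8x8 gradient patch.
scene = np.linspace(0.0, 1.0, 64).reshape(8, 8)

# Simulate a burst of 16 noisy frames, as a handheld camera might capture.
frames = [scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(16)]

# Averaging the burst suppresses noise; detail common to all frames survives.
stacked = np.mean(frames, axis=0)

noise_single = np.abs(frames[0] - scene).mean()
noise_stacked = np.abs(stacked - scene).mean()
print(noise_single > noise_stacked)  # stacking reduces error
```

Real pipelines also align frames before merging and weight them by sharpness, but the principle is the same: the output only contains detail that was actually present in the input frames.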
In this case, though, that doesn’t seem to be quite what Samsung is doing. Outside of Moon photography, Samsung’s processing pipeline works only with the data in front of it: it will smooth out the edges of a building photographed with a shaky hand from many blocks away, but it won’t add windows to the building’s side that weren’t there before.

The Moon appears to be a special case, and the clever test by ibreakphotos shows how Samsung is doing a little extra processing. They displayed a deliberately blurred image of the Moon on a screen in front of the camera, then took a picture of it.

The final image displays details that Samsung’s processing couldn’t reasonably have recovered, because they were blurred away in the source image. Instead, it adds lines and, in a later test, Moon-like texture to portions of the original photograph that were clipped to white. It’s not a wholesale copy and paste, but it isn’t just enhancing what it sees, either.
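That’s why the test is so telling: a heavy blur genuinely destroys fine detail rather than merely hiding it, so any crater-like texture in the final shot must have been synthesized. A rough sketch of the idea in Python, with random texture standing in for a real Moon photo:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical detailed "Moon" image: random texture standing in for craters.
moon = rng.random((64, 64))

def box_blur(img, k):
    """Naive box blur: average each pixel over a k x k neighborhood (edge-clamped)."""
    out = np.empty_like(img)
    h, w = img.shape
    r = k // 2
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

blurred = box_blur(moon, 15)

def detail(img):
    """Crude measure of high-frequency detail: mean absolute neighbor difference."""
    return np.abs(np.diff(img, axis=1)).mean()

print(detail(moon))     # high: fine texture present
print(detail(blurred))  # near zero: the detail is gone, not just hidden
```

If a camera pointed at the blurred version produces a sharp, textured result, that texture had to come from somewhere other than the scene.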

But… is that so bad? The fact is that smartphone cameras already employ numerous behind-the-scenes procedures in an effort to take pictures you’ll enjoy. Even if you disable every scene-optimization and beauty option, your photographs are still being altered to brighten faces and make fine details pop in the right places.

For example, Face Unblur on recent Google Pixel phones will use machine learning to merge an image from the ultrawide camera with one from your primary camera, giving you a sharp final image when your subject’s face is slightly blurred by motion.
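That kind of merge can be sketched simply. The following is an assumption-laden toy, not Google’s implementation: score the face region in each frame by a crude sharpness measure, and paste in the sharper one.

```python
import numpy as np

def sharpness(img):
    """Crude sharpness score: variance of a Laplacian-like second difference."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap.var()

def fuse(main_frame, alt_frame, face_box):
    """If the face region is sharper in the alternate frame, paste it in.

    face_box is a hypothetical (y0, y1, x0, x1) region from a face detector.
    """
    y0, y1, x0, x1 = face_box
    out = main_frame.copy()
    if sharpness(alt_frame[y0:y1, x0:x1]) > sharpness(main_frame[y0:y1, x0:x1]):
        out[y0:y1, x0:x1] = alt_frame[y0:y1, x0:x1]
    return out
```

A production system would also align the two cameras’ viewpoints and blend the seam, but the key point stands: the output detail still comes from real captured frames, not from a learned prior about what faces look like.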
