Apologies if this has been asked. I was reviewing the transcript on the iOS 16 camera improvements as they relate to depth and depth maps. It's my understanding that models with LiDAR scanners play a big role in the depth maps that are captured when taking a photo. I know this is an improvement over models that rely only on the TrueDepth camera, but how does this play into the new Lock Screen setup?
Is this effect created solely through software (subject segmentation), or does the presence of LiDAR and captured depth maps influence the result when creating the depth effect that pulls the subject to the front while the background stays behind the clock?
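For context, here's roughly how I've been poking at my own photos to see whether they even carry an embedded depth map. This is just a rough sketch using ImageIO's auxiliary-data APIs; the file path is a placeholder, and I'm not claiming this is how the Lock Screen feature itself reads the data:

```swift
import Foundation
import ImageIO

/// Returns true if the image file at `url` carries an embedded depth,
/// disparity, or portrait-matte auxiliary image (how Portrait-style
/// captures store depth info alongside the HEIC/JPEG).
func hasEmbeddedDepthData(at url: URL) -> Bool {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else {
        return false
    }
    // Depth (absolute) and disparity (relative) are stored as separate
    // auxiliary-data types; the portrait effects matte is a third one.
    let auxTypes: [CFString] = [
        kCGImageAuxiliaryDataTypeDepth,
        kCGImageAuxiliaryDataTypeDisparity,
        kCGImageAuxiliaryDataTypePortraitEffectsMatte
    ]
    return auxTypes.contains { type in
        CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, type) != nil
    }
}

// Placeholder path — point it at a photo exported from the library.
let photoURL = URL(fileURLWithPath: "/path/to/portrait_photo.heic")
print("Embedded depth data:", hasEmbeddedDepthData(at: photoURL))
```

Some of my non-Portrait shots report no embedded depth data at all but still get the Lock Screen layering, which is what made me wonder whether it's purely software segmentation.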
Thanks so much in advance!