iPhone 7 Plus depth map

Is there, or will there be, a built-in feature to generate and access a depth map from the dual cameras on the iPhone 7 Plus?

We have all seen the Apple keynote where they demonstrate using the cameras to create a shallow depth-of-field effect, but reading the API reference I can't find anything beyond how to access the raw input from the two cameras.

Focus Pixels are clearly something else, since they are supported by earlier devices.

Answered by Media Engineer in 177133022

Accepted Answer

I'll send out a post soon about iOS 10 APIs that are specific to the iPhone 7 and 7 Plus cameras. The short answer is that no, depth maps will not be available in this release.

Lacking access to the depth map, an API call that would allow us to capture an image from both cameras at the exact same moment would be extremely useful. Without that sort of API access, the dual cameras aren't very useful for anything other than zooming in with higher image quality. The only reason Apple is able to create such a cool effect with Portrait mode is their unfettered access to both camera streams at the same time, which, sadly, developers don't have. What can we do to change this?

Vote for features using bugreport.apple.com, please.

Awesome that this feature was added in iOS 11. I'm having a little difficulty getting it working, however. I'm adding an AVCaptureDepthDataOutput to the AVCaptureSession, but I'm not sure if I'm doing it right. Any advice on getting this working?

// In -createDepthDataOutput: create the output and register for per-frame callbacks.
AVCaptureDepthDataOutput *depthDataOutput = [AVCaptureDepthDataOutput new];
[depthDataOutput setDelegate:self callbackQueue:videoDataOutputQueue];

// During session configuration: add the output if the session supports it.
AVCaptureDepthDataOutput *depthDataOutput = [self createDepthDataOutput];
if ( [self.captureSession canAddOutput:depthDataOutput] )
{
    [self.captureSession addOutput:depthDataOutput];
}

Here are the errors I'm getting (I've successfully set the activeDepthDataFormat):


2017-06-06 12:40:20.967239-0700 YoPuppet-ios[4783:1659940] [] FigDerivedFormatDescriptionGetDerivedStorage signalled err=-12710 (kFigFormatDescriptionError_InvalidParameter) (!desc) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2010.7.3/Sources/Core/FigFormatDescription/FigFormatDescription.c line 683

2017-06-06 12:40:20.967269-0700 YoPuppet-ios[4783:1659940] [] CMVideoFormatDescriptionGetDimensions signalled err=-12710 (kFigFormatDescriptionError_InvalidParameter) (NULL desc) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2010.7.3/Sources/Core/FigFormatDescription/FigVideoFormatDescription.c line 384

2017-06-06 12:40:20.967642-0700 YoPuppet-ios[4783:1659940] [] FigDerivedFormatDescriptionGetDerivedStorage signalled err=-12710 (kFigFormatDescriptionError_InvalidParameter) (!desc) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2010.7.3/Sources/Core/FigFormatDescription/FigFormatDescription.c line 683

2017-06-06 12:40:20.967702-0700 YoPuppet-ios[4783:1659940] [] CMVideoFormatDescriptionGetDimensions signalled err=-12710 (kFigFormatDescriptionError_InvalidParameter) (NULL desc) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2010.7.3/Sources/Core/FigFormatDescription/FigVideoFormatDescription.c line 384

2017-06-06 12:40:20.968402-0700 YoPuppet-ios[4783:1659940] added depthdata output

2017-06-06 12:40:20.968665-0700 YoPuppet-ios[4783:1659940] [] FigDerivedFormatDescriptionGetDerivedStorage signalled err=-12710 (kFigFormatDescriptionError_InvalidParameter) (!desc) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2010.7.3/Sources/Core/FigFormatDescription/FigFormatDescription.c line 683

2017-06-06 12:40:20.968685-0700 YoPuppet-ios[4783:1659940] [] CMVideoFormatDescriptionGetDimensions signalled err=-12710 (kFigFormatDescriptionError_InvalidParameter) (NULL desc) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2010.7.3/Sources/Core/FigFormatDescription/FigVideoFormatDescription.c line 384

2017-06-06 12:40:21.490456-0700 YoPuppet-ios[4783:1660019] [] <<<< AVCaptureDevice >>>> _registerServerConnectionDiedNotification_block_invoke: (pthread:0x17e0b7000) ServerConnectionDied

2017-06-06 12:40:29.996075-0700 YoPuppet-ios[4783:1660019] [] <<<< AVCaptureSession >>>> -[AVCaptureSession _handleServerConnectionDiedNotification]: (0x1c40043c0)(pthread:0x17e0b7000) ServerConnectionDied

There appears to be an internal error that is preventing this from working in the initial release of the iOS 11 beta.

Does AVCaptureDepthDataOutput only work on the iPhone 7 Plus (dual cameras, stereo vision), or can I make it work on the iPhone 7 as well (mono vision)?
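
For what it's worth, a runtime check along these lines might help answer that on a specific device (just a sketch; using supportedDepthDataFormats as the availability signal is an assumption on my part):

#import <AVFoundation/AVFoundation.h>

// Look for a back-facing dual camera; discovery returns no devices on an iPhone 7.
AVCaptureDeviceDiscoverySession *discovery =
    [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[ AVCaptureDeviceTypeBuiltInDualCamera ]
                                                            mediaType:AVMediaTypeVideo
                                                             position:AVCaptureDevicePositionBack];
AVCaptureDevice *dualCamera = discovery.devices.firstObject;

// On iOS 11, formats that can deliver depth list them in supportedDepthDataFormats.
BOOL depthAvailable = (dualCamera != nil) &&
                      (dualCamera.activeFormat.supportedDepthDataFormats.count > 0);
NSLog(@"Depth capture available: %d", depthAvailable);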

I believe it's only on the iPhone 7 Plus. What I don't know, however, is whether it works if I simply use, on an iPhone 7, a picture that was taken on the Plus. Does anybody know if that works?


Also, in session 508 they talked about scaling and normalizing the depth map, but they didn't show any code. Does anybody know how to do that?


They did talk about some new CI filters that could be used, but there are no docs about them anywhere, such as which inputs they take, etc.
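
Since the session didn't show code, here is a rough sketch of normalizing a disparity map by hand (this is my assumption about what "normalizing" means there, not the session's actual approach; it converts to 32-bit float disparity and rescales the converted copy's buffer into 0–1):

#import <AVFoundation/AVFoundation.h>
#import <float.h>
#import <math.h>

static AVDepthData *NormalizedDepthData(AVDepthData *depthData)
{
    // Work on a 32-bit float disparity representation of the map.
    AVDepthData *converted =
        [depthData depthDataByConvertingToDepthDataType:kCVPixelFormatType_DisparityFloat32];
    CVPixelBufferRef map = converted.depthDataMap;
    CVPixelBufferLockBaseAddress(map, 0);

    size_t width  = CVPixelBufferGetWidth(map);
    size_t height = CVPixelBufferGetHeight(map);
    size_t stride = CVPixelBufferGetBytesPerRow(map) / sizeof(float);
    float *pixels = (float *)CVPixelBufferGetBaseAddress(map);

    // First pass: find min and max, skipping NaN holes in the map.
    float minV = FLT_MAX, maxV = -FLT_MAX;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            float v = pixels[y * stride + x];
            if (isnan(v)) continue;
            if (v < minV) minV = v;
            if (v > maxV) maxV = v;
        }
    }

    // Second pass: linearly rescale every value into the 0–1 range in place.
    float range = maxV - minV;
    if (range > 0) {
        for (size_t y = 0; y < height; y++) {
            for (size_t x = 0; x < width; x++) {
                pixels[y * stride + x] = (pixels[y * stride + x] - minV) / range;
            }
        }
    }

    CVPixelBufferUnlockBaseAddress(map, 0);
    return converted;
}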

Hi,


Do you happen to have the demos from session 508 (https://developer.apple.com/videos/play/wwdc2017/508/)?


I'm having a hard time trying to implement them and getting them to work. I have no idea how to use the API he mentioned to upscale and normalize the depth map.
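
In case it's useful, this is one way to do the upscaling part with Core Image's standard Lanczos scale filter (a sketch only; I'm assuming the disparity map can be wrapped in a CIImage this way, and targetWidth is a placeholder for the photo's pixel width):

#import <CoreImage/CoreImage.h>

// Wrap the low-resolution disparity map in a CIImage.
CIImage *depthImage = [CIImage imageWithCVPixelBuffer:depthData.depthDataMap];

// Scale it up so its width matches the photo's width.
CGFloat scale = targetWidth / depthImage.extent.size.width;
CIFilter *scaleFilter = [CIFilter filterWithName:@"CILanczosScaleTransform"];
[scaleFilter setValue:depthImage forKey:kCIInputImageKey];
[scaleFilter setValue:@(scale) forKey:kCIInputScaleKey];
[scaleFilter setValue:@(1.0) forKey:kCIInputAspectRatioKey];
CIImage *upscaledDepth = scaleFilter.outputImage;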

You might be able to find what you're looking for in https://devimages-cdn.apple.com/wwdc-services/h8a19f8f/049CCC2F-0D8A-4F7D-BAB9-2D8F5BAA7030/contents.json, which I found while searching for the sample code I was after.

If a photo was taken on a 7 Plus and contains depth information in the file, you can read that depth data on any device running iOS 11 or macOS 10.13.
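
As a rough sketch of how that read looks with ImageIO and AVDepthData (photoURL is a placeholder, and I'm assuming the depth is stored as the disparity auxiliary image):

#import <ImageIO/ImageIO.h>
#import <AVFoundation/AVFoundation.h>

// Open the image file and pull out the disparity auxiliary data dictionary.
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)photoURL, NULL);
NSDictionary *auxInfo = CFBridgingRelease(
    CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeDisparity));
if (auxInfo != nil) {
    NSError *error = nil;
    AVDepthData *depthData = [AVDepthData depthDataFromDictionaryRepresentation:auxInfo
                                                                           error:&error];
    // depthData.depthDataMap is a CVPixelBuffer containing the disparity values.
}
if (source) CFRelease(source);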
