Post not yet marked as solved
Can a developer access the dot projector and other hardware used by Face ID in order to scan objects?
Post not yet marked as solved
UVCAssistant crashes on an M1 Mac running Monterey when the camera runs at 10G link speed. 5G speed is OK, and macOS versions before Monterey are OK too.
The camera device is recognized by the system, but QuickTime Player can't display the video stream.
Here is the UVCAssistant crash log; I hope it's helpful.
crash log
Post not yet marked as solved
Hi,
The infrared image taken by the TrueDepth camera seems not to include heat data; am I correct to say that? It looks like it captures only the depth of an image.
Post not yet marked as solved
Hi,
I just noticed, using the Apple demo app for TrueDepth images on different devices, that there are significant differences in the quality of the provided data.
Data captured on iPhones before the iPhone 13 lineup shows quite smooth surfaces, as you may know from one of the many 3D scanner apps that display data from the front-facing TrueDepth camera.
Data captured on, e.g., an iPhone 13 Pro now shows some kind of wavy overlaid structure and has, visually perceived, very low accuracy compared to older iPhones.
iPhone 12 Pro: data as point cloud, object about 25 cm from the phone:
iPhone 13 Pro: data as point cloud, same setup:
I tried it out on different iPhone 13 devices, all running the latest iOS, with the same result. The images were captured with the same code. Captures made with some of the standard 3D scanner apps for the TrueDepth camera produce similarly lower-quality images and point clouds.
Is this due to downgraded hardware (a smaller TrueDepth camera module) on the new iPhone release, or to a software issue within iOS, e.g. in the driver part of the TrueDepth camera?
Are there any improvements or solutions already announced by Apple?
Best
Holger
Post not yet marked as solved
Currently I am getting depth data from the delegate, and I even converted it to a CIImage to check its output (it is grayscale), but I cannot append that pixel buffer to an AVAssetWriterInputPixelBufferAdaptor, because when I try to save to the photo gallery I get the error mentioned below.
Error:
The operation couldn’t be completed. (PHPhotosErrorDomain error 3302.)
Setup:
private let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInDualCamera, .builtInTrueDepthCamera, .builtInDualWideCamera],
    mediaType: .video,
    position: .back)
I tried both video pixel formats:
videoDataOutput!.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_DepthFloat16]
videoDataOutput!.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCMPixelFormat_422YpCbCr8]
I want to record the TrueDepth or Dual camera's depth data output while recording the video data. I have already managed to get the AVCaptureDepthDataOutput object and display it in real time, but I also need the depth to be recorded as an individual AVMediaTypeVideo or AVMediaTypeMetadata track in the movie, and to read it back for post-processing.
Instead of using AVCaptureMovieFileOutput, I use a movie writer and an AVAssetWriterInputPixelBufferAdaptor to append pixel buffers. I have tried appending the streaming depth to a normal AVAssetWriterInput with AVVideoCodecTypeH264, but that failed.
Is it possible to append depth data buffers in the same way as video data, or is there any other way of doing it?
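One common reason appending depth buffers directly fails is that the video encoders accept YCbCr/BGRA pixel buffers, not kCVPixelFormatType_DepthFloat16/32. A minimal sketch of one possible workaround, assuming you render each depth map into a BGRA buffer before appending (function name and parameters are hypothetical):

```swift
import AVFoundation

// Sketch, not a verified pipeline: create a video input plus an adaptor whose
// pool vends BGRA buffers, then draw each converted depth map into a pool
// buffer and append it. Note an 8-bit video track quantizes the depth; for
// lossless post-processing, writing the raw float maps to a sidecar file may
// be a better fit.
func makeDepthWriterInput(width: Int, height: Int)
        -> (AVAssetWriterInput, AVAssetWriterInputPixelBufferAdaptor) {
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.hevc,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ])
    input.expectsMediaDataInRealTime = true
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: width,
            kCVPixelBufferHeightKey as String: height
        ])
    return (input, adaptor)
}
```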
Post not yet marked as solved
I want to allow the user to record video only in portrait orientation and prevent recording in landscape. I'm using UIImagePickerController and couldn't find any orientation options on it. Could anyone help me out with this?
Post not yet marked as solved
In some cases, when I use the builtInDualCamera capture device, I get a blank white depth photo. In other cases it works fine, so I guess there is a limitation.
I use this code to get depth image:
private func saveDepth(name: String, photo: AVCapturePhoto) throws -> URL {
    guard var depthData = photo.depthData else {
        throw CameraError.photo("No depth data")
    }
    print("DEBUG: depth data quality \(depthData.depthDataQuality)")
    if let orientationValue = photo.metadata[kCGImagePropertyOrientation as String] as? UInt32,
       let orientation = CGImagePropertyOrientation(rawValue: orientationValue) {
        depthData = depthData.applyingExifOrientation(orientation)
    }
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let image = CIImage(cvPixelBuffer: converted.depthDataMap)
    guard let colorSpace = image.colorSpace else {
        throw CameraError.photo("Can not get image color space")
    }
    guard let data = context.tiffRepresentation(
        of: image,
        format: .Lf,
        colorSpace: colorSpace,
        options: [:]
    ) else {
        throw CameraError.photo("Can not create TIFF")
    }
    let url = Path.cachePath.appendingPathComponent(name + "_depth.tiff")
    try data.write(to: url)
    return url
}
Is there any information about the maximum distance or other limitations of the dual camera?
Post not yet marked as solved
Hello all,
I'm having some issues with the getUserMedia API that is now available in WKWebView.
On a page, I will access the camera using the following code:
navigator.mediaDevices.getUserMedia({
		video: {
				facingMode: "user"
		},
		audio: false
}).then(function(webcamStream) {
		document.querySelector("#lr_record_video").srcObject = webcamStream; /* this is a HTML video tag available on the page */
}).catch(function() {
		console.log("fail")
});
This mostly works. However, unlike in Safari (and now Chrome), where the video element simply shows the video track of the webcamStream MediaStream object, WKWebView opens a "Live Broadcast" panel, and the video track pauses whenever that panel is closed. Is there any way to replicate the Safari and Chrome behavior, with no panel popup?
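The fullscreen "Live Broadcast" panel is the behavior WKWebView falls back to when it does not allow inline playback. A minimal sketch of the host-app side, assuming inline playback is the missing piece (the page's video element would additionally need the `playsinline` attribute):

```swift
import WebKit

// Sketch: configure the web view to permit inline media playback so the
// stream stays inside the page instead of opening a fullscreen panel, and
// allow playback to start without a user gesture.
let config = WKWebViewConfiguration()
config.allowsInlineMediaPlayback = true
config.mediaTypesRequiringUserActionForPlayback = []
let webView = WKWebView(frame: .zero, configuration: config)
```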
Thanks
Post not yet marked as solved
I've modified a SwiftUI tutorial with a gesture button to record videos like so:
var longPress: some Gesture {
    DragGesture(minimumDistance: 0)
        .updating($isDetectingLongPress) { currentState, gestureState, transaction in
            gestureState = true
            transaction.animation = Animation.easeIn(duration: 2.0)
            model.captureButtonHold()
        }
        .onEnded { finished in
            print("Finished long press")
            model.stopRecording()
            self.completedLongPress = true
        }
}
However, the captureVideoAndMetadata func (below) doesn't capture or print anything except "Capture video called...", and of course after the gesture completes it prints "Finished long press". Any suggestions? Thanks in advance.
This is the captureVideoAndMetadata func:
private func captureVideoAndMetadata() {
    logger.log("Capture video called...")
    dispatchPrecondition(condition: .onQueue(.main))
    /// Retrieve and store the video preview layer's video orientation on the main
    /// queue before starting the session queue, so that UI elements are accessed
    /// only on the main thread.
    let videoPreviewLayerOrientation = session.connections[0].videoOrientation
    guard let movieFileOutput = self.movieFileOutput else {
        return
    }
    sessionQueue.async {
        if !movieFileOutput.isRecording {
            if UIDevice.current.isMultitaskingSupported {
                self.backgroundRecordingID = UIApplication.shared.beginBackgroundTask(expirationHandler: nil)
            }
            // Update the orientation on the movie file output video connection before recording.
            let movieFileOutputConnection = movieFileOutput.connection(with: .video)
            movieFileOutputConnection?.videoOrientation = videoPreviewLayerOrientation
            let availableVideoCodecTypes = movieFileOutput.availableVideoCodecTypes
            if availableVideoCodecTypes.contains(.hevc) {
                movieFileOutput.setOutputSettings([AVVideoCodecKey: AVVideoCodecType.hevc], for: movieFileOutputConnection!)
            }
            // Start recording video to a temporary file.
            let outputFileName = NSUUID().uuidString
            let outputFilePath = (NSTemporaryDirectory() as NSString).appendingPathComponent((outputFileName as NSString).appendingPathExtension("mov")!)
            movieFileOutput.startRecording(to: URL(fileURLWithPath: outputFilePath), recordingDelegate: self)
            print("Creating capture path: \(movieFileOutput)")
        } else {
            movieFileOutput.stopRecording()
        }
    }
}
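One thing worth noting about the gesture above: the `.updating` closure runs on every gesture change and is meant to be side-effect free, so calling `model.captureButtonHold()` there can fire many times per press. A sketch of an alternative, using a hypothetical stand-in model (replace `RecorderModel` with the post's actual `model`):

```swift
import SwiftUI

// Hypothetical stand-in for the post's camera model.
final class RecorderModel: ObservableObject {
    @Published var isRecording = false
    func captureButtonHold() { isRecording = true /* start recording here */ }
    func stopRecording() { isRecording = false /* stop recording here */ }
}

struct RecordButton: View {
    @StateObject private var model = RecorderModel()
    @GestureState private var isPressing = false

    var body: some View {
        Circle()
            .fill(model.isRecording ? Color.red : Color.gray)
            .frame(width: 72, height: 72)
            .gesture(
                DragGesture(minimumDistance: 0)
                    // Keep `.updating` side-effect free; it only tracks press state.
                    .updating($isPressing) { _, state, _ in state = true }
                    .onChanged { _ in
                        // Guard so recording starts only once per press.
                        if !model.isRecording { model.captureButtonHold() }
                    }
                    .onEnded { _ in model.stopRecording() }
            )
    }
}
```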
Post not yet marked as solved
Hello,
I have an app under development that uses the onboard camera to capture images. The app behaves very similarly to the Camera app: there is a button to capture images and a horizontal list view below it showing a preview of all captured images.
The app works fine when taking 5-6 images, but when the image count reaches 15-18 in the same session, the app crashes with the error below.
[SOServiceConnection] <SOServiceConnection: 0x28363a9c0>: XPC connection interrupted
I am unable to decipher the error and hence figure out a solution to the problem.
I am assuming it has something to do with the preview images, or with the number of images occupying RAM.
Any help in this regard would be highly appreciated.
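If the crash is indeed memory pressure from holding full-resolution captures for the previews, one approach is to decode a small thumbnail for the list view instead of keeping the full images in memory. A sketch, assuming each capture is available as `Data` (function name and parameters are hypothetical):

```swift
import ImageIO
import UIKit

// Sketch: downsample at decode time with ImageIO so only the small thumbnail
// is ever held in memory, rather than the full-resolution image.
func thumbnail(from data: Data, maxPixel: CGFloat) -> UIImage? {
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixel,
        kCGImageSourceCreateThumbnailWithTransform: true
    ]
    guard let source = CGImageSourceCreateWithData(data as CFData, nil),
          let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary)
    else { return nil }
    return UIImage(cgImage: cgImage)
}
```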
Post not yet marked as solved
I have been unable to capture Live Photos using UIImagePickerController. I can capture still photos and even video (which is not my scenario but I checked just to make sure), but the camera does not capture live photos. The documentation suggests it should (source):
To obtain the motion and sound content of a live photo for display (using the PHLivePhotoView class), include the kUTTypeImage and kUTTypeLivePhoto identifiers in the allowed media types when configuring an image picker controller. When the user picks or captures a Live Photo, the editingInfo dictionary contains the livePhoto key, with a PHLivePhoto representation of the photo as the corresponding value.
I've set up my controller:
let camera = UIImagePickerController()
camera.sourceType = .camera
camera.mediaTypes = [UTType.image.identifier, UTType.livePhoto.identifier]
camera.delegate = context.coordinator
In the delegate I check for the Live Photo:
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
if let live = info[.livePhoto] as? PHLivePhoto {
// handle live photo
} else if let takenImage = info[.originalImage] as? UIImage, let metadata = info[.mediaMetadata] as? [AnyHashable:Any] {
// handle still photo
}
}
But I never get the Live Photo.
I've tried adding NSMicrophoneUsageDescription to the Info.plist, thinking it needs microphone permission, but that did not help. Of course, I've added NSCameraUsageDescription to grant camera permission.
Has anyone successfully captured Live Photos using UIImagePickerController?
Post not yet marked as solved
I'm developing a small C tool which needs access to the camera. When starting this tool indirectly from another application, calling +requestAccessForMediaType: on AVCaptureDevice doesn't show the necessary authorization dialog. Instead, the application quits immediately without any message. Info.plist is available from the main bundle, and the described keys for accessing the camera and microphone are present and valid.
Same behavior with Xcode 12 on Catalina and Xcode 13 on Monterey, so I assume it's not a bug.
What can I do to make the authorization dialog show up?
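One detail that can cause exactly this symptom: the access request completes asynchronously, so a command-line tool that returns from main immediately never lives long enough for the dialog to appear. A minimal sketch of keeping the process alive until the handler runs (shown in Swift rather than C; this assumes the usage-description keys are reachable by TCC, e.g. embedded in the binary):

```swift
import AVFoundation

// Sketch: request camera access, then block on the main dispatch queue until
// the asynchronous completion handler fires and exits the process.
AVCaptureDevice.requestAccess(for: .video) { granted in
    print(granted ? "camera access granted" : "camera access denied")
    exit(granted ? 0 : 1)
}
dispatchMain() // never returns; the handler above terminates the process
```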
Post not yet marked as solved
How do we permanently allow camera usage in our custom iOS app?
Since iOS 14.0, the camera alert is shown again and again in the same session despite the camera permissions being granted in Settings.
Post not yet marked as solved
I’m building a react-native app that crashes when the camera is opened and then cancelled. I’m using react-native-image-picker. Any thoughts?
Post not yet marked as solved
My HomeKit camera does not work across different networks at the same geolocation.
See the screenshots:
Case 1: live streaming from the camera not working
Case 2: live streaming from the camera working
The problem is that HomeKit adds "Apple TV home" and "HomePod home" as home hubs to the "Garage" home and makes "Apple TV home" the primary hub, but these local networks are not connected in any way, and as soon as "Apple TV home" becomes the home hub, it stops streaming live video.
How can I fix it?
Post not yet marked as solved
I wrote code to get video through an AVCaptureSession.
The code looks like the following.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    guard let baseRawAddress = CVPixelBufferGetBaseAddress(pixelBuffer) else {
        // Unlock before bailing out so the buffer isn't left locked.
        CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        return
    }
    // Convert the base address to a typed pointer of the appropriate type.
    let opaquePtr = OpaquePointer(baseRawAddress)
    let baseAddress = UnsafeMutablePointer<UInt8>(opaquePtr)
    let byteBuffer = UnsafeMutablePointer<UInt8>(baseAddress)
    CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
}
Is gamma correction already applied to the color information of the pixels retrieved here (byteBuffer in the Swift code above)?
If so, is it possible to retrieve the gamma value?
Post not yet marked as solved
I am developing a camera app and need real-time grayscale video from the camera.
I am using the AVFoundation framework and set the video output like this:
private func setupCamera() {
    ...
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main)
    ...
}
For captureOutput, I wrote this:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    let imageBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)!
    let bytesPerRow = UInt(CVPixelBufferGetBytesPerRow(imageBuffer))
    let width = CVPixelBufferGetWidth(imageBuffer)
    let height = CVPixelBufferGetHeight(imageBuffer)
    let bitsPerComponent = 8
    var colorSpace = CGColorSpaceCreateDeviceGray()
    var bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
    let context = CGContext(data: baseAddress, width: Int(width), height: Int(height), bitsPerComponent: bitsPerComponent, bytesPerRow: Int(bytesPerRow), space: colorSpace, bitmapInfo: bitmapInfo.rawValue)! as CGContext
    let imageRef = context.makeImage()!
    let image = UIImage(cgImage: imageRef, scale: 1.0, orientation: UIImage.Orientation.up)
    CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
    captureImageView.image = image
}
The point is
var colorSpace = CGColorSpaceCreateDeviceGray()
var bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
The image I get is something like a grayscale video, but not a complete video image.
How can I get a complete grayscale video?
Please advise if anyone knows how to get real-time grayscale video.
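A likely cause of the distorted result above is interpreting a 4-bytes-per-pixel BGRA buffer as a 1-byte-per-pixel gray bitmap. A sketch of one alternative, assuming the video output is instead configured for a biplanar YCbCr format (kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) so the gray image can be built from the luma (Y) plane alone:

```swift
import AVFoundation
import CoreGraphics

// Sketch: plane 0 of a biplanar YCbCr buffer is already an 8-bit luma
// (grayscale) image, so a gray CGContext over that plane yields the
// complete grayscale frame with no channel misinterpretation.
func grayImage(from pixelBuffer: CVPixelBuffer) -> CGImage? {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0) else { return nil }
    let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)
    let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
    let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
    let context = CGContext(data: base, width: width, height: height,
                            bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                            space: CGColorSpaceCreateDeviceGray(),
                            bitmapInfo: CGImageAlphaInfo.none.rawValue)
    return context?.makeImage()
}
```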
Post not yet marked as solved
I would like to create an application that performs recognition based on the color of a specific area of a camera image.
Therefore, I would like to know whether the color information can be obtained accurately.
(Specifically, whether it is linearly related to physical quantities.)
Question 1
Does the RGB image acquired by AVCapturePhotoOutput have gamma correction applied? If I want to remove the gamma correction, can I get the gamma value from the API? And are there any other corrections that should be turned off? I was able to turn off the automatic exposure adjustment and white balance.
Question 2
If I use a RAW (Bayer) image acquired with AVCapturePhotoOutput, can I get the value of each pixel?
Question 3 (optional)
I understand that camera images go through various processing steps (dead-pixel removal, black-level adjustment, noise reduction, white balance, demosaicing, gamma correction, etc.).
It would be very helpful to know the processing pipeline on the iPhone,
and also at which step the RAW and RGB images are output.
Is this information publicly available?
Post not yet marked as solved
Hi,
Is there a C equivalent of the Swift/Objective-C ImageCaptureCore framework:
https://developer.apple.com/documentation/imagecapturecore?language=objc ?
Regards,