Hello there!
I am trying to use PHPickerViewController to load videos, but I ran into a problem: I can load only some videos, not all of them.
I referred to the existing thread (https://developer.apple.com/forums/thread/652695), but it doesn't work.
This is the code I use to present the PHPickerViewController:
var config = PHPickerConfiguration()
config.selectionLimit = 1
config.filter = .videos
config.preferredAssetRepresentationMode = .current
let picker = PHPickerViewController(configuration: config)
picker.delegate = self
present(picker, animated: true, completion: nil)
Below is the relevant implementation of the method: func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]):
picker.dismiss(animated: true, completion: nil)
for result in results {
    result.itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { (url, error) in
        if let error = error {
            print(error)
            return
        }
        guard let url = url else { return }
        let fileName = "\(Date().timeIntervalSince1970).\(url.pathExtension)"
        let newUrl = URL(fileURLWithPath: NSTemporaryDirectory() + fileName)
        try? FileManager.default.copyItem(at: url, to: newUrl)
        DispatchQueue.main.async {
            self.playVideo(newUrl)
        }
    }
}
Before the print(error) call in the handler above ran, Xcode printed three lines of errors:
[AXRuntimeCommon] Unknown client: TestPHPicker
[default] [ERROR] Could not create a bookmark: NSError: Cocoa 257 "The file couldn’t be opened because you don’t have permission to view it." }
Error copying file type public.movie. Error: Error Domain=NSItemProviderErrorDomain Code=-1000 "Cannot load representation of type public.movie" UserInfo={NSLocalizedDescription=Cannot load representation of type public.movie, NSUnderlyingError=0x283a4a610 {Error Domain=NSCocoaErrorDomain Code=4101 "Couldn’t communicate with a helper application." UserInfo={NSUnderlyingError=0x283a48b10 {Error Domain=PHAssetExportRequestErrorDomain Code=2 "(null)" UserInfo={NSUnderlyingError=0x283a4a550 {Error Domain=CloudPhotoLibraryErrorDomain Code=82 "Failed to download CPLResourceTypeOriginal" UserInfo=0x28219b300 (not displayed)}}}}}}
And here is what the print(error) call printed:
Error Domain=NSItemProviderErrorDomain Code=-1000 "Cannot load representation of type public.movie" UserInfo={NSLocalizedDescription=Cannot load representation of type public.movie, NSUnderlyingError=0x283a4a610 {Error Domain=NSCocoaErrorDomain Code=4101 "Couldn’t communicate with a helper application." UserInfo={NSUnderlyingError=0x283a48b10 {Error Domain=PHAssetExportRequestErrorDomain Code=2 "(null)" UserInfo={NSUnderlyingError=0x283a4a550 {Error Domain=CloudPhotoLibraryErrorDomain Code=82 "Failed to download CPLResourceTypeOriginal" UserInfo=0x28219b300 (not displayed)}}}}}}
Some videos load successfully, while others produce this error. I don't know why this happens.
I am testing on an iPhone X running iOS 14.0 (18A373), with Xcode 12.0 (12A7209).
Thanks for the help!
The sample code in the Apple documentation for PHCloudIdentifier does not compile in Xcode 13.2.1.
Can the interface for identifier conversion be clarified so that the answer values are more accessible/readable? The values are 'hidden' inside a Result enum.
It was difficult (for me) to rewrite the sample code because I made the mistake of interpreting the Result type as a tuple; the Result type is really an enum.
Using the Result type as the return value from library.cloudIdentifierMappings(forLocalIdentifiers:) and .localIdentifierMappings(for:) puts the actual mapped identifiers inside the enum, where they need additional access via a .stringValue message or an evaluation of an element of the Result enum.
For others finding the same compile issue, here is a working version of the sample code. This compiles in Xcode 13.2.1.
func localId2CloudId(localIdentifiers: [String]) -> [String] {
    var mappedIdentifiers = [String]()
    let library = PHPhotoLibrary.shared()
    let iCloudIDs = library.cloudIdentifierMappings(forLocalIdentifiers: localIdentifiers)
    for aCloudID in iCloudIDs {
        // The value is a Result enum .. not a tuple
        let cloudResult = aCloudID.value
        switch cloudResult {
        case .success(let success):
            let newValue = success.stringValue
            mappedIdentifiers.append(newValue)
        case .failure(let failure):
            // Notify the user of the error as appropriate.
            print("Failed to map \(aCloudID.key): \(failure)")
        }
    }
    return mappedIdentifiers
}
func cloudId2LocalId(assetCloudIdentifiers: [PHCloudIdentifier]) -> [String] {
    // Patterned error handling per the documentation.
    var localIDs = [String]()
    let localIdentifiers: [PHCloudIdentifier: Result<String, Error>] =
        PHPhotoLibrary.shared().localIdentifierMappings(for: assetCloudIdentifiers)
    for cloudIdentifier in assetCloudIdentifiers {
        guard let identifierMapping = localIdentifiers[cloudIdentifier] else {
            print("Failed to find a mapping for \(cloudIdentifier).")
            continue
        }
        switch identifierMapping {
        case .success(let success):
            localIDs.append(success)
        case .failure(let failure):
            let thisError = failure as? PHPhotosError
            switch thisError?.code {
            case .identifierNotFound:
                // Skip the missing or deleted assets.
                print("Failed to find the local identifier for \(cloudIdentifier). \(String(describing: thisError?.localizedDescription))")
            case .multipleIdentifiersFound:
                // Prompt the user to resolve the cloud identifier that matched multiple assets.
                print("Found multiple local identifiers for \(cloudIdentifier). \(String(describing: thisError?.localizedDescription))")
                // if let selectedLocalIdentifier = promptUserForPotentialReplacement(with: thisError.userInfo[PHLocalIdentifiersErrorKey]) {
                //     localIDs.append(selectedLocalIdentifier)
                // }
            default:
                print("Encountered an unexpected error looking up the local identifier for \(cloudIdentifier). \(String(describing: thisError?.localizedDescription))")
            }
        }
    }
    return localIDs
}
I have received a lot of crash logs, only on iOS 16.
The crash occurred when I called:
[[PHImageManager defaultManager] requestImageDataForAsset:asset options:options resultHandler:resultHandler]
Here is the crash log:
Exception Type: NSInternalInconsistencyException
ExtraInfo:
Code Type: arm64
OS Version: iPhone OS 16.0 (20A5328h)
Hardware Model: iPhone14,3
Launch Time: 2022-07-30 18:43:25
Date/Time: 2022-07-30 18:49:17
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason:Unhandled error (NSCocoaErrorDomain, 134093) occurred during faulting and was thrown: Error Domain=NSCocoaErrorDomain Code=134093 "(null)"
Last Exception Backtrace:
0 CoreFoundation 0x00000001cf985dc4 0x1cf97c000 + 40388
1 libobjc.A.dylib 0x00000001c8ddfa68 0x1c8dc8000 + 96872
2 CoreData 0x00000001d56d2358 0x1d56cc000 + 25432
3 CoreData 0x00000001d56fa19c 0x1d56cc000 + 188828
4 CoreData 0x00000001d5755be4 0x1d56cc000 + 564196
5 CoreData 0x00000001d57b0508 0x1d56cc000 + 935176
6 PhotoLibraryServices 0x00000001df1783e0 0x1df0ed000 + 570336
7 Photos 0x00000001df8aa88c 0x1df85d000 + 317580
8 PhotoLibraryServices 0x00000001df291de0 0x1df0ed000 + 1723872
9 CoreData 0x00000001d574e518 0x1d56cc000 + 533784
10 libdispatch.dylib 0x00000001d51fc0fc 0x1d51f8000 + 16636
11 libdispatch.dylib 0x00000001d520b634 0x1d51f8000 + 79412
12 CoreData 0x00000001d574e0a0 0x1d56cc000 + 532640
13 PhotoLibraryServices 0x00000001df291d94 0x1df0ed000 + 1723796
14 PhotoLibraryServices 0x00000001df291434 0x1df0ed000 + 1721396
15 Photos 0x00000001df8a8380 0x1df85d000 + 308096
16 Photos 0x00000001df89d050 0x1df85d000 + 262224
17 Photos 0x00000001df87f62c 0x1df85d000 + 140844
18 Photos 0x00000001df87ee94 0x1df85d000 + 138900
19 Photos 0x00000001df87e594 0x1df85d000 + 136596
20 Photos 0x00000001df86b5c8 0x1df85d000 + 58824
21 Photos 0x00000001df86d938 0x1df85d000 + 67896
22 Photos 0x00000001dfa37a64 0x1df85d000 + 1944164
23 Photos 0x00000001dfa37d18 0x1df85d000 + 1944856
24 youavideo -[YouaImageManager requestImageDataForAsset:options:resultHandler:] (in youavideo) (YouaImageManager.m:0) 27
25 youavideo -[YouaAlbumTransDataController requstTransImageHandler:] (in youavideo) (YouaAlbumTransDataController.m:0) 27
26 youavideo -[YouaAlbumTransDataController requstTransWithHandler:] (in youavideo) (YouaAlbumTransDataController.m:77) 11
27 youavideo -[YouaUploadTransDataOperation startTrans] (in youavideo) (YouaUploadTransDataOperation.m:102) 19
28 Foundation 0x00000001c9e78038 0x1c9e3c000 + 245816
29 Foundation 0x00000001c9e7d704 0x1c9e3c000 + 268036
30 libdispatch.dylib 0x00000001d51fa5d4 0x1d51f8000 + 9684
31 libdispatch.dylib 0x00000001d51fc0fc 0x1d51f8000 + 16636
32 libdispatch.dylib 0x00000001d51ff58c 0x1d51f8000 + 30092
33 libdispatch.dylib 0x00000001d51febf4 0x1d51f8000 + 27636
34 libdispatch.dylib 0x00000001d520db2c 0x1d51f8000 + 88876
35 libdispatch.dylib 0x00000001d520e338 0x1d51f8000 + 90936
36 libsystem_pthread.dylib 0x00000002544b9dbc 0x2544b9000 + 3516
37 libsystem_pthread.dylib 0x00000002544b9b98 0x2544b9000 + 2968
I can't find the definition of error code 134093.
I don't know what's going wrong in iOS 16.
Would anyone have a hint of why this could happen and how to resolve it?
Thanks very much.
I cannot find any documentation on isPrivacySensitiveAlbum. I've granted my app access to all photos, and I'm not sure what else to try.
Code that triggers the crash:
let options = PHFetchOptions()
options.fetchLimit = 1
let assetColl = PHAssetCollection.fetchAssetCollections(withLocalIdentifiers: [localId], options: options)
if assetColl.count > 0 {
    if let asset = PHAsset.fetchKeyAssets(in: assetColl.firstObject!, options: options) {
The stack trace from here on:
2023-04-15 06:34:41.628537-0700 DPF[33615:6484880] -[PHCollectionList isPrivacySensitiveAlbum]: unrecognized selector sent to instance 0x7ff09232aec0
2023-04-15 06:34:41.632378-0700 DPF[33615:6484880] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[PHCollectionList isPrivacySensitiveAlbum]: unrecognized selector sent to instance 0x7ff09232aec0'
*** First throw call stack:
(
0 CoreFoundation 0x00007ff80045478b __exceptionPreprocess + 242
1 libobjc.A.dylib 0x00007ff80004db73 objc_exception_throw + 48
2 CoreFoundation 0x00007ff8004638c4 +[NSObject(NSObject) instanceMethodSignatureForSelector:] + 0
3 CoreFoundation 0x00007ff800458c66 ___forwarding___ + 1443
4 CoreFoundation 0x00007ff80045ae08 _CF_forwarding_prep_0 + 120
5 Photos 0x00007ff80b8480e1 +[PHAsset fetchKeyAssetsInAssetCollection:options:] + 86
6 DPF 0x0000000100791029 $s3DPF16AlbumListFetcherV22loadKeyImageForLocalIdySo7UIImageCSgSSYaFTY0_ + 569
My app uses PHLivePhoto.request to generate Live Photos, but it leaks memory if I use a custom targetSize.
PHLivePhoto.request(withResourceFileURLs: [imageUrl, videoUrl], placeholderImage: nil, targetSize: targetSize, contentMode: .aspectFit) {[weak self] (livePhoto, info) in
Changing targetSize to CGSizeZero resolves the problem.
PHLivePhoto.request(withResourceFileURLs: [imageUrl, videoUrl], placeholderImage: nil, targetSize: CGSizeZero, contentMode: .aspectFit) {[weak self] (livePhoto, info) in
Hello,
I'm faced with a really perplexing issue. The primary problem is that sometimes I get depth and video data as expected, but at other times I don't. Sometimes I'll get both data outputs for 4-5 frames and then it just stops. My source code is a modified version of the sample code provided by Apple, and interestingly I can't reproduce this issue with the Apple sample app, so I'm wondering what I could be doing wrong.
Here's the code for setting up the capture input. preferredDepthResolution is 1280 in my case. I'm running this on an iPad Pro (6th gen) with iOS 17.0.3 (21A360). I also encounter this issue on an iPhone 13 Pro running iOS 17.0 (21A329).
private func setupLiDARCaptureInput() throws {
    // Look up the LiDAR camera.
    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) else {
        throw ConfigurationError.lidarDeviceUnavailable
    }
    guard let format = (device.formats.last { format in
        format.formatDescription.dimensions.width == preferredWidthResolution &&
        format.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange &&
        format.videoSupportedFrameRateRanges.first(where: { $0.maxFrameRate >= 60 }) != nil &&
        !format.isVideoBinned &&
        !format.supportedDepthDataFormats.isEmpty
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }
    guard let depthFormat = (format.supportedDepthDataFormats.last { depthFormat in
        depthFormat.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_DepthFloat16
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }
    // Begin the device configuration.
    try device.lockForConfiguration()
    // Configure the device and depth formats.
    device.activeFormat = format
    device.activeDepthDataFormat = depthFormat
    let desc = format.formatDescription
    dimensions = CMVideoFormatDescriptionGetDimensions(desc)
    let duration = CMTime(value: 1, timescale: CMTimeScale(60))
    device.activeVideoMinFrameDuration = duration
    device.activeVideoMaxFrameDuration = duration
    // Finish the device configuration.
    device.unlockForConfiguration()
    self.device = device
    print("Selected video format: \(device.activeFormat)")
    print("Selected depth format: \(String(describing: device.activeDepthDataFormat))")
    // Add a device input to the capture session.
    let deviceInput = try AVCaptureDeviceInput(device: device)
    captureSession.addInput(deviceInput)
    guard let audioDevice = AVCaptureDevice.default(for: .audio) else {
        return
    }
    // Configure audio input - always configure audio even if isAudioEnabled is false.
    audioDeviceInput = try! AVCaptureDeviceInput(device: audioDevice)
    captureSession.addInput(audioDeviceInput)
    deviceSystemPressureStateObservation = device.observe(
        \.systemPressureState,
        options: .new
    ) { _, change in
        guard let systemPressureState = change.newValue else { return }
        print("system pressure \(systemPressureState.levelAsString()) due to \(systemPressureState.factors)")
    }
}
Here's how I'm setting up the output:
private func setupLiDARCaptureOutputs() {
    // Create an object to output video sample buffers.
    videoDataOutput = AVCaptureVideoDataOutput()
    captureSession.addOutput(videoDataOutput)
    // Create an object to output depth data.
    depthDataOutput = AVCaptureDepthDataOutput()
    depthDataOutput.isFilteringEnabled = false
    captureSession.addOutput(depthDataOutput)
    audioDeviceOutput = AVCaptureAudioDataOutput()
    audioDeviceOutput.setSampleBufferDelegate(self, queue: videoQueue)
    captureSession.addOutput(audioDeviceOutput)
    // Create an object to synchronize the delivery of depth and video data.
    outputVideoSync = AVCaptureDataOutputSynchronizer(dataOutputs: [depthDataOutput, videoDataOutput])
    outputVideoSync.setDelegate(self, queue: videoQueue)
    // Enable camera intrinsics matrix delivery.
    guard let outputConnection = videoDataOutput.connection(with: .video) else { return }
    if outputConnection.isCameraIntrinsicMatrixDeliverySupported {
        outputConnection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}
The top part of my delegate implementation is as follows:
func dataOutputSynchronizer(
    _ synchronizer: AVCaptureDataOutputSynchronizer,
    didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection
) {
    // Retrieve the synchronized depth and sample buffer container objects.
    guard let syncedDepthData = synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
          let syncedVideoData = synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
        if synchronizedDataCollection.synchronizedData(for: depthDataOutput) == nil {
            print("no depth data at time \(mach_absolute_time())")
        }
        if synchronizedDataCollection.synchronizedData(for: videoDataOutput) == nil {
            print("no video data at time \(mach_absolute_time())")
        }
        return
    }
    print("received depth data \(mach_absolute_time())")
}
As you can see, I'm logging to the console whenever depth data is not received. Note that because I'm driving the video frames at 60 fps, it's expected that I'll only receive depth data for every other video frame.
Console output is posted as a follow-up comment (because of the character limit). I edited some lines out for brevity. You'll see it started streaming correctly, but after a while it stopped receiving both video and depth outputs (in some other runs it works perfectly, and in yet others I receive no depth data whatsoever). One thing to note: I sometimes run QuickTime mirroring to see what the app is displaying on the device screen, so I'm not sure if that's causing any interference; that said, I don't see any system pressure changes either.
Any help is most appreciated! Thanks.
Hello,
I tried to build the AVCam sample application for iOS 17 and run it on a MacBook ("Designed for iPad") with macOS 14.3 (Sonoma).
https://developer.apple.com/documentation/avfoundation/capture_setup/avcam_building_a_camera_app?language=objc
When building and testing with Xcode 15.2, the AVCam application crashes systematically when choosing the target "My Mac (Designed for iPad)".
In fact, a SIGABRT signal is received in a thread dealing with the "portrait effect":
Thread 19 Queue : com.apple.portrait.effect_init (serial)
Is this a known bug? Is there a workaround for this case?
Best regards
An external webcam is detected by AVCam, but the preview and capture are systematically upside down (it may be the same with the FaceTime HD camera).
Is this a known bug? Is there a workaround for this case?
As the title suggests: is it possible with the current Apple Vision Pro simulator to recognize objects/humans, as is currently possible on the iPhone? I am not even sure whether there is an API for accessing the cameras of the Vision Pro.
My goal is to recognize, for example, a human and attach a 3D object to it, such as a hat. Can this be done?
I need to capture 4K photos with a 4:3 ratio from the camera. I can do this, but I want to disable video stabilization. I can disable video stabilization using the AVCaptureSessionPresetHigh preset, but AVCaptureSessionPresetHigh gives me a 16:9 photo with the surroundings cropped. Unfortunately, the 16:9 ratio does not meet my needs.
When I run the session using the AVCaptureSessionPresetPhoto preset and add an AVCapturePhotoOutput, I cannot turn off image stabilization.
self.capturePhotoOutput = AVCapturePhotoOutput.init()
self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera,
                                             for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    if ((captureSession?.canAddOutput(capturePhotoOutput!)) != nil) {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    if let connection = capturePhotoOutput?.connection(with: .video) {
        if connection.isVideoStabilizationSupported {
            connection.preferredVideoStabilizationMode = .off
        }
    }
    DispatchQueue.main.async { [self] in
        self.capturePhotoOutput?.isHighResolutionCaptureEnabled = true
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
    }
}
@objc private func handleTakePhoto() {
    let photoSettings = AVCapturePhotoSettings()
    if let photoPreviewType = photoSettings.availablePreviewPhotoPixelFormatTypes.first {
        photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: photoPreviewType]
        photoSettings.isAutoStillImageStabilizationEnabled = false
        capturePhotoOutput?.capturePhoto(with: photoSettings, delegate: self)
    }
}
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let dataImage = photo.fileDataRepresentation() {
        print(UIImage(data: dataImage)?.size as Any)
        let dataProvider = CGDataProvider(data: dataImage as CFData)
        let cgImageRef: CGImage! = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
        let image = UIImage(cgImage: cgImageRef, scale: 1.0, orientation: rotateImage(orientation: currentOrientation))
    } else {
        print("some error here")
    }
}
As a temporary solution, I added only AVCaptureVideoDataOutput to the session without adding AVCapturePhotoOutput, and I can capture in 4:3 format with the captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) function. However, this time I cannot get a 4K image.
In short, I need to turn off video stabilization in a session with AVCapturePhotoOutput added.
self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera,
                                             for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    videoDataOutput = AVCaptureVideoDataOutput()
    videoDataOutput?.videoSettings = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
    ]
    videoDataOutput?.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    if ((captureSession?.canAddOutput(videoDataOutput!)) != nil) {
        captureSession?.addOutput(videoDataOutput!)
    }
    /* If I uncomment these lines, video stabilization is enabled.
    if ((captureSession?.canAddOutput(capturePhotoOutput!)) != nil) {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    */
    DispatchQueue.main.async { [self] in
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
    }
}
@objc private func handleTakePhoto() {
    takePicture = true
}
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !takePicture {
        return // We have nothing to do with the image buffer.
    }
    // Try to get a CVImageBuffer out of the sample buffer.
    guard let cvBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    let rect = CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(cvBuffer), height: CVPixelBufferGetHeight(cvBuffer))
    let ciImage = CIImage.init(cvImageBuffer: cvBuffer)
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciImage, from: rect)
    guard cgImage != nil else { return }
    let uiImage = UIImage(cgImage: cgImage!)
}
Dear Team,
I am trying to add a contact from a QR code, but it seems that the built-in QR code reader of the iPhone camera isn't able to correctly decode a full name containing a space in the last name, e.g. Collin A. Al Miller.
I have attached all the screenshots for your reference.
Here are the examples:
1) When I focus the iPhone camera on the QR code containing the full name (Collin A. Al Miller), the scan gives an empty result without the full name.
The attached screenshots: a) CameraQRNotWorking, b) NotWorkingQRCode.
2) When I remove the blank space and add a comma or a hyphen to the full name, it gets recognized and works perfectly.
The attached screenshots: a) CameraQRCodeWorking, b) workingQRCODE.
3) Both full names (Collin A. Al-Miller and Collin A, Al Miller) work perfectly in the Android camera's QR scanner.
The attached screenshot: AndroidQRCODE.
I hope this issue will be resolved in an upcoming release. Kindly provide feedback related to this issue.
Code to generate the vCard:
var str = "BEGIN:VCARD \n" +
"VERSION:2.1 \n" +
"FN:\("Collin A. Al Miller") \n" +
"TITLE:\("") \n"
if options.showPersonalPhone {
str.append(contentsOf: "item1.TEL;CELL:\("+91987654320") \n")
str.append(contentsOf: "item1.X-ABLabel:Mobile\n")
}
if options.showWorkPhone {
str.append(contentsOf: "item2.TEL;WORK;VOICE:\("+91987654320") \n")
str.append(contentsOf: "item2.X-ABLabel:Work Phone\n")
}
if options.showEmail {
str.append(contentsOf: "item3.EMAIL;WORK;INTERNET:\("test@gmail.com") \n")
str.append(contentsOf: "item3.X-ABLabel:Work Email\n")
}
if options.showWebsite {
str.append(contentsOf: "URL:www.test.com \n")
}
if options.showLocation {
str.append(contentsOf: "ADR;WORK:;;\("Bangalore") \n")
}
str.append(contentsOf: "END:VCARD")
Hi, hope all are well!
We've been working on a live streaming app and it's going quite well!
Just got the aspect ratio locked as desired.
Now the audio: its volume is extremely low. It sounds like it's using the headset mic instead of the bottom mic that's used in FaceTime or speakerphone calls.
We tried flipping cameras and specifying sample rates, almost every constraint in MediaConstraints - no go!
Is there any way to specify this?
Thanks in advance!
I'm currently working on an iPad application that uses a third-party SDK to scan a driver's license and then allows the user to take a picture of themselves. However, when the user is directed to the self-photo view, the AVCaptureSession preview freezes. The app as a whole does not freeze, only the preview. I believe this is an OS issue, because it only happens on 9th-generation iPads; all the other iPads work fine. Has anyone else seen this issue? Also, is there any way to see logs from the AVCaptureSession so I can see what is happening? Maybe there is a way I can detect when it freezes and then restart it.
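One way to get some visibility into what the session is doing is to observe the interruption and runtime-error notifications that AVFoundation posts. This is a hedged sketch, not a confirmed fix; `session` stands for the app's AVCaptureSession instance.
```swift
import AVFoundation

// Sketch: log capture-session interruptions and runtime errors so you can
// see when (and why) the preview stops, and decide whether to restart.
// `session` is assumed to be the app's AVCaptureSession.
let center = NotificationCenter.default

let interruptionObserver = center.addObserver(
    forName: .AVCaptureSessionWasInterrupted, object: session, queue: .main
) { note in
    print("Capture session interrupted: \(note.userInfo ?? [:])")
}

let errorObserver = center.addObserver(
    forName: .AVCaptureSessionRuntimeError, object: session, queue: .main
) { note in
    print("Capture session runtime error: \(note.userInfo ?? [:])")
    // A restart attempt could go here, e.g. calling session.startRunning()
    // again on the session's configuration queue.
}
// Keep the observer tokens alive for as long as the logging is wanted.
```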
I'm creating an app that uses AVCaptureSession to pass camera input to AVCaptureMetadataOutput and scan QR codes.
After updating to iPadOS 17.4, an issue has occurred where the delegate method of AVCaptureMetadataOutputObjectsDelegate is not called on some devices.
The following devices are experiencing this issue.
iPad (7th Gen)
iPad (6th Gen)
iPad Pro (10.5)
iPad Pro (12.9 2nd Gen)
This issue has not occurred on any other devices I have.
This may only occur on devices with model number "iPad7,x".
I tried running the AVFoundation sample code from the Apple Developer site on the above devices. The same problem still occurs.
https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
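For context, the core of the metadata-output setup being described looks roughly like this (a sketch with assumed names such as `session`; not the exact project code):
```swift
// Sketch of a typical AVCaptureMetadataOutput QR setup (illustrative only).
let metadataOutput = AVCaptureMetadataOutput()
if session.canAddOutput(metadataOutput) {
    session.addOutput(metadataOutput)
    metadataOutput.setMetadataObjectsDelegate(self, queue: .main)
    metadataOutput.metadataObjectTypes = [.qr] // set after adding the output
}

// The delegate callback that reportedly stops being called on the affected iPads:
func metadataOutput(_ output: AVCaptureMetadataOutput,
                    didOutput metadataObjects: [AVMetadataObject],
                    from connection: AVCaptureConnection) {
    // Handle AVMetadataMachineReadableCodeObject values here.
}
```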
Are any additional settings required after iPadOS 17.4?
Or is there some problem on the OS side?
The methods described in https://developer.apple.com/forums/thread/715452?answerId=729571022#729571022 to obtain 48 MP image captures no longer seem to work on iOS 17.4 under certain circumstances.
Previously, the following steps were sufficient to get 48 MP capture from AVFoundation:
Configuration
Set the active AVCaptureDevice.Format to a format where supportedMaxPhotoDimensions contains the (8064, 6048) size
Set AVCapturePhotoOutput.maxPhotoDimensions to (8064, 6048)
Set AVCapturePhotoOutput.maxPhotoQualityPrioritization to .quality
Taking a photo
Set AVCapturePhotoSettings.maxPhotoDimensions to (8064, 6048)
Set AVCapturePhotoSettings.photoQualityPrioritization to .quality
As of iOS 17.4, the exact same code that worked through 17.3 no longer works if the session was configured manually (resulting in the .inputPriority session preset) rather than using a session preset (like .high). When configuring the session manually, all the intervening steps work (an active format can be found with the appropriate dimensions, the photo output settings can be set to 8064x6048 successfully, etc.), but the resulting photo is 4032x3024. Again, these same steps worked flawlessly prior to iOS 17.4.
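For reference, a minimal sketch of the steps listed above (assuming `device`, `photoOutput`, and a photo-capture delegate named `photoCaptureDelegate` are already set up; those names are illustrative, not from the original post):
```swift
// Sketch of the 48 MP configuration and capture steps described above.
let targetDimensions = CMVideoDimensions(width: 8064, height: 6048)

// Configuration
if let format = device.formats.first(where: { format in
    format.supportedMaxPhotoDimensions.contains {
        $0.width == targetDimensions.width && $0.height == targetDimensions.height
    }
}) {
    try? device.lockForConfiguration()
    device.activeFormat = format
    device.unlockForConfiguration()
}
photoOutput.maxPhotoDimensions = targetDimensions
photoOutput.maxPhotoQualityPrioritization = .quality

// Taking a photo
let settings = AVCapturePhotoSettings()
settings.maxPhotoDimensions = targetDimensions
settings.photoQualityPrioritization = .quality
photoOutput.capturePhoto(with: settings, delegate: photoCaptureDelegate)
```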
Am I missing something? Did iOS 17.4 change the requirements for 48 MP capture, or is this a bug?
I want to take 48 MP photos and get the same ISO and exposure duration as I set.
Configuration
Set the active AVCaptureDevice.Format to a format where supportedMaxPhotoDimensions contains the (8064, 6048) size
Set AVCapturePhotoOutput.maxPhotoDimensions to (8064, 6048)
If AVCaptureDevice.isExposureModeSupported(.custom), set AVCaptureDevice.exposureMode = .custom
Call AVCaptureDevice.setExposureModeCustomWithDuration:1/20 ISO:100 completionHandler:handler
Taking a photo
Set AVCapturePhotoSettings.maxPhotoDimensions to (8064, 6048)
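For reference, a minimal sketch of the custom-exposure steps above (assuming `device` is the configured AVCaptureDevice; the name is illustrative):
```swift
// Sketch of locking a custom exposure before capture.
try? device.lockForConfiguration()
if device.isExposureModeSupported(.custom) {
    device.setExposureModeCustom(
        duration: CMTime(value: 1, timescale: 20), // 1/20 s
        iso: 100
    ) { _ in
        // Exposure is applied; a capture could be triggered from here.
    }
}
device.unlockForConfiguration()
```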
The API discussion of setExposureModeCustomWithDuration says:
https://developer.apple.com/documentation/avfoundation/avcapturedevice/1624646-setexposuremodecustomwithduratio/
To ensure that the receiver's ISO and exposureDuration values are honored while in AVCaptureExposureModeCustom or AVCaptureExposureModeLocked, you must set your AVCapturePhotoSettings.photoQualityPrioritization property to AVCapturePhotoQualityPrioritizationSpeed.
But at the last step, when I set
AVCapturePhotoSettings.maxPhotoQualityPrioritization = .speed,
the photo resolution is (4000, 3000), only 12 MP, not (8000, 6000); the ISO and exposure duration on the photo are the same as what I set.
And when I set
AVCapturePhotoSettings.maxPhotoQualityPrioritization = .balanced or .quality, the photo is (8000, 6000), but the ISO and exposure duration obtained on the photo are different from the ones I set.
What do I need to do to take 48 MP photos and have the ISO and exposure duration applied successfully?
How do I programmatically open the camera in spatial mode in iOS for capturing spatial videos? Is there any API for opening the camera in spatial mode?
Hello, I fetch a Live Photo's AVAsset using PHImageManager and PHAssetResourceManager to get the data, and then I wrap it in an AVAsset using a file URL. Everything works fine, but I also want to trim this AVAsset using AVMutableComposition. I use the insertTimeRange method of AVMutableCompositionTrack, and I don't know why, but the naturalSize of originalVideoTrack and newVideoTrack are different. This happens only with Live Photos; regular videos work fine. It seems like an AVMutableCompositionTrack bug inside AVFoundation. Please give me some info. Thanks.
PHImageManager.default().requestLivePhoto(
    for: phAsset,
    targetSize: size,
    contentMode: .aspectFill,
    options: livePhotoOptions
) { [weak self] livePhoto, info in
    guard let livePhoto else {
        return
    }
    self?.writeAVAsset(livePhoto: livePhoto, fileURL: fileURL)
}
private func writeAVAsset(livePhoto: PHLivePhoto, fileURL: URL) {
    let resources = PHAssetResource.assetResources(for: livePhoto)
    guard let videoResource = resources.first(where: { $0.type == .pairedVideo }) else {
        return
    }
    let requestOptions = PHAssetResourceRequestOptions()
    var data = Data()
    dataRequestID = PHAssetResourceManager.default().requestData(
        for: videoResource,
        options: requestOptions,
        dataReceivedHandler: { chunk in
            data.append(chunk)
        },
        completionHandler: { [weak self] error in
            try? data.write(to: fileURL)
            let avAsset = AVAsset(url: fileURL)
            let composition = AVMutableComposition()
            if let originalVideoTrack = avAsset.tracks(withMediaType: .video).first,
               let videoTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: 0) {
                // originalVideoTrack has naturalSize (width: 1744, height: 1308)
                try? videoTrack.insertTimeRange(timeRange, of: originalVideoTrack, at: .zero)
                videoTrack.preferredTransform = originalVideoTrack.preferredTransform
                // videoTrack has naturalSize (width: 1920.0, height: 1440.0)
            }
        }
    )
}
I am trying to implement the ability to save a photo to the user’s photo library on macOS. When I call PHPhotoLibrary.requestAuthorization(for: .addOnly) I just get a .denied status. The user is not prompted for access.
I tried adding these entitlements: com.apple.security.personal-information.photos-library, com.apple.security.assets.pictures.read-write. I tried turning off sandboxing entirely.
I tried saving despite the authorization being denied, but unsurprisingly that gives me this error: Domain=PHPhotosErrorDomain Code=3311
I can almost do what I want with NSSharingService(named: .addToIPhoto), but that has the side effect of launching Photos.
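For reference, a minimal sketch of the add-only authorization and save flow being attempted (assuming `imageData` already holds the encoded image; that name is an assumption):
```swift
import Photos

// Sketch of the add-only save flow described above; `imageData` is assumed.
PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
    guard status == .authorized || status == .limited else {
        print("Add-only access not granted: \(status.rawValue)")
        return
    }
    PHPhotoLibrary.shared().performChanges({
        let request = PHAssetCreationRequest.forAsset()
        request.addResource(with: .photo, data: imageData, options: nil)
    }) { success, error in
        print("Saved: \(success), error: \(String(describing: error))")
    }
}
```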
Is there a trick to getting PHPhotoLibrary.requestAuthorization(for: .addOnly) to work?
Thanks.
John
The app crashes when creating a new album. This crash did not occur in our own testing, but after publishing to the App Store, the probability of occurrence seems to be very high.
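For context, the standard album-creation flow looks roughly like this (a sketch; the actual crashing code isn't shown in the post, and `albumTitle` is an assumption):
```swift
import Photos

// Sketch of a typical album-creation call (illustrative only).
PHPhotoLibrary.shared().performChanges({
    PHAssetCollectionChangeRequest.creationRequestForAssetCollection(withTitle: albumTitle)
}) { success, error in
    print("Album created: \(success), error: \(String(describing: error))")
}
```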
Hey!
I'm trying to open the front camera in my demo app, and from what I read in the Apple docs and forums, if you have configured your Persona you will get that image.
But I'm having some issues with it. This is my code:
struct ContentView: View {
    @Environment(\.presentationMode) var presentationMode

    var body: some View {
        ZStack {
            VStack {
                Image("logo")
                    .resizable()
                    .frame(width: 337, height: 211)
                    .accessibilityHidden(true)
                Text("My first Vision Pro app.")
                    .multilineTextAlignment(.center)
                    .font(.headline)
                    .frame(width: 340)
                    .padding(.bottom, 10)
                Button {
                    // Add camera functionality here
                } label: {
                    Text("Open Camera")
                        .frame(maxWidth: .infinity)
                }
                .onAppear {
                    requestCameraAccess()
                }
                .onTapGesture {
                    // Check if camera permission is granted
                    if AVCaptureDevice.authorizationStatus(for: .video) == .authorized {
                        openFrontCamera()
                    } else {
                        requestCameraAccess()
                    }
                }
            }
        }
    }

    func requestCameraAccess() {
        AVCaptureDevice.requestAccess(for: .video) { authorized in
            DispatchQueue.main.async {
                if authorized {
                    // Permission granted, open camera if needed
                    openFrontCamera()
                } else {
                    // Handle permission denied case (optional)
                }
            }
        }
    }

    func openFrontCamera() {
    }
}
In the openFrontCamera() function I tried using .devices(), .default(), and other methods like you would use on other Apple devices, but this doesn't work on Vision Pro, and I can't find anything that tells me how to open it.
Has anyone been able to work this out?