Hi,
I have a use case where I'd like to handle certain errors during FairPlay content key requests and prevent the automatic retries that follow.
Here's the current flow:
1. The FairPlay certificate is requested and obtained from my server.
2. makeStreamingContentKeyRequestData is called on the keyRequest.
3. The license server returns a 403 along with a JSON body containing a detailed code and message.
4. The error is caught and handled properly by calling AVContentKeyRequest.processContentKeyResponseError.
5. The AVContentKeySession automatically retries up to 8 times by providing a new key request through public func contentKeySession(_ session: AVContentKeySession, didProvide keyRequest: AVContentKeyRequest).
6. My license server gets hit with 8 requests that will always result in a 403; these retries are useless.
My custom error is successfully caught later down the line through AVPlayerItem.observe(\.status), which is great.
The thing is, I'd like to catch the 403 error and prevent any retry from being made at step 5, ideally through
public func contentKeySession(_ session: AVContentKeySession, contentKeyRequest keyRequest: AVContentKeyRequest, didFailWithError err: Error)
I've looked for quite a while and just can't seem to find any way of achieving this. Is this not supported at all?
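One possible workaround sketch, since there doesn't appear to be a supported switch for disabling the retries: cache key identifiers that have already failed with a non-retryable error, so each automatic retry is answered locally instead of hitting the license server. MyDRMError and the failed-key cache are hypothetical names, and the licensing calls are elided.

import AVFoundation

enum MyDRMError: Error { case licenseDenied } // hypothetical error type

final class KeyDelegate: NSObject, AVContentKeySessionDelegate {
    // Key identifiers that already failed with a non-retryable license error (e.g. HTTP 403).
    private var permanentlyFailedKeyIDs = Set<String>()

    func contentKeySession(_ session: AVContentKeySession,
                           didProvide keyRequest: AVContentKeyRequest) {
        let keyID = keyRequest.identifier as? String ?? ""

        // Fail fast on automatic retries for keys the server already rejected.
        if permanentlyFailedKeyIDs.contains(keyID) {
            keyRequest.processContentKeyResponseError(MyDRMError.licenseDenied)
            return
        }

        // ... fetch the certificate, call makeStreamingContentKeyRequestData,
        // and post the SPC to the license server as usual. On a 403:
        // permanentlyFailedKeyIDs.insert(keyID)
        // keyRequest.processContentKeyResponseError(MyDRMError.licenseDenied)
    }

    func contentKeySession(_ session: AVContentKeySession,
                           contentKeyRequest keyRequest: AVContentKeyRequest,
                           didFailWithError err: Error) {
        // Observed to fire after the session gives up retrying; useful for
        // surfacing the error, but apparently not for preventing the retries.
    }
}

This doesn't stop AVContentKeySession from retrying, but it keeps the retries from reaching the license server.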
My app currently captures video using an AVCaptureSession set with the AVCaptureSessionPreset1920x1080 preset. However, I'd like to update this behavior, such that video can be recorded at a range of different resolutions.
There isn't a preset aligning to each desired resolution, so I thought I'd instead directly set the AVCaptureDeviceFormat. For any desired resolution, I would find the format that is closest without going under the desired resolution, and then crop it down as a post-processing step.
However, I've observed that a device can offer a range of formats at each resolution, differing in various other settings. Presumably there is logic within AVCaptureSession that selects a reasonable default among them, but since I am applying the format directly, I don't think I have a way to make use of that default logic, and it appears to be undocumented.
Does this mean that the only way to select a format is to implement a comparison function that considers all different values of all different properties on AVCaptureDeviceFormat, and then sort the formats according to this comparator?
If so, what if some new property is added to AVCaptureDeviceFormat in the future? The sort would not take this new property into account, and the function might select a format with some new undesired property.
Are there any guarantees about what types of formats will be supported on a device? For example, can I take for granted that a '420v' format will exist at each resolution? If so, I could filter the formats down to only those with this setting without risking filtering out all of the supported formats.
I suspect I may be missing something obvious. Any help would be greatly appreciated!
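A minimal sketch of one possible selection strategy (not Apple's internal default logic): restrict candidates to a known pixel format such as 420v, then pick the smallest format whose dimensions still cover the target resolution.

import AVFoundation

func bestFormat(for device: AVCaptureDevice,
                targetWidth: Int32, targetHeight: Int32) -> AVCaptureDevice.Format? {
    let candidates = device.formats.filter { format in
        // Keep only '420v' formats that are at least as large as the target.
        let pixelFormat = CMFormatDescriptionGetMediaSubType(format.formatDescription)
        guard pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange else { return false }
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        return dims.width >= targetWidth && dims.height >= targetHeight
    }
    // Prefer the smallest area that still covers the target, to minimize cropping.
    return candidates.min { a, b in
        let da = CMVideoFormatDescriptionGetDimensions(a.formatDescription)
        let db = CMVideoFormatDescriptionGetDimensions(b.formatDescription)
        return Int(da.width) * Int(da.height) < Int(db.width) * Int(db.height)
    }
}

// Usage sketch (inside lockForConfiguration/unlockForConfiguration):
// if let format = bestFormat(for: device, targetWidth: 1600, targetHeight: 900) {
//     device.activeFormat = format
// }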
I have the following code in my ObservableObject class, and recently Xcode started flagging purple runtime issues for it (probably since iOS 18):
Issue 1: Performing I/O on the main thread can cause slow launches.
Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
Issue 3: Interprocess communication on the main thread can cause non-deterministic delays.
Here is the code:
@Published var cameraAuthorization: AVAuthorizationStatus
@Published var micAuthorization: AVAuthorizationStatus
@Published var photoLibAuthorization: PHAuthorizationStatus
@Published var locationAuthorization: CLAuthorizationStatus
var locationManager: CLLocationManager

override init() {
    // Issue 1: Performing I/O on the main thread can cause slow launches.
    cameraAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
    micAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.audio)
    photoLibAuthorization = PHPhotoLibrary.authorizationStatus(for: .addOnly)

    // Issue 1: Performing I/O on the main thread can cause slow launches.
    locationManager = CLLocationManager()
    locationAuthorization = locationManager.authorizationStatus

    super.init()

    // Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
    locationManager.delegate = self
}
And also, in the route change notification handler for AVAudioSession.routeChangeNotification:
//Issue 3: Hangs - Interprocess communication on the main thread can cause non-deterministic delays.
let categoryPlayback = (AVAudioSession.sharedInstance().category == .playback)
I wonder how checking authorization status can cause these issues? What is the fix here?
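One approach that seems to quiet these diagnostics (a sketch assuming the project can use Swift concurrency; PermissionsModel is a hypothetical stand-in for the class above): query the statuses off the main thread and publish the results on the main actor, and read the location authorization from locationManagerDidChangeAuthorization(_:) instead of at init.

import AVFoundation
import Photos

@MainActor
final class PermissionsModel: NSObject, ObservableObject {
    @Published var cameraAuthorization: AVAuthorizationStatus = .notDetermined
    @Published var micAuthorization: AVAuthorizationStatus = .notDetermined
    @Published var photoLibAuthorization: PHAuthorizationStatus = .notDetermined

    // Query the statuses on a background task, then publish on the main actor.
    func refreshAuthorizationStatuses() async {
        let (camera, mic, photos) = await Task.detached(priority: .utility) {
            (AVCaptureDevice.authorizationStatus(for: .video),
             AVCaptureDevice.authorizationStatus(for: .audio),
             PHPhotoLibrary.authorizationStatus(for: .addOnly))
        }.value
        cameraAuthorization = camera
        micAuthorization = mic
        photoLibAuthorization = photos
    }
}

In SwiftUI this could be driven from .task { await model.refreshAuthorizationStatuses() }; whether that fully satisfies the runtime checker is an assumption.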
Topic: Media Technologies
SubTopic: Photos & Camera
Tags: Core Location, AVFoundation, Concurrency, Xcode Sanitizers and Runtime Issues
Take a Swift file and put this in it:
import Foundation
import AVFoundation
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Hello, testing speech synthesis on macOS.")
if let voice = AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-GB.Daniel") {
    utterance.voice = voice
    print("Using voice: \(voice.name), \(voice.language)")
} else {
    print("Daniel voice not found on macOS.")
}
synthesizer.speak(utterance)
I get no speech output and this log output
Error reading languages in for local resources.
Error reading languages in for local resources.
Using voice: Daniel, en-GB
Program ended with exit code: 0
Why? And what's with "Error reading languages in for local resources."?
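If this is running as a command-line target, one likely cause is that the process exits before any audio is rendered, since speak(_:) returns immediately; the "Error reading languages in for local resources." lines appear to be benign logging noise, though that isn't documented. A sketch that keeps the process alive until the didFinish delegate callback:

import Foundation
import AVFoundation

final class SpeechDelegate: NSObject, AVSpeechSynthesizerDelegate {
    let done = DispatchSemaphore(value: 0)
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        done.signal()
    }
}

let synthesizer = AVSpeechSynthesizer()
let delegate = SpeechDelegate()
synthesizer.delegate = delegate

let utterance = AVSpeechUtterance(string: "Hello, testing speech synthesis on macOS.")
if let voice = AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-GB.Daniel") {
    utterance.voice = voice
}
synthesizer.speak(utterance)

// Pump the main run loop so audio callbacks can fire, and wait for didFinish.
while delegate.done.wait(timeout: .now()) == .timedOut {
    RunLoop.current.run(until: Date().addingTimeInterval(0.05))
}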
Hi, I'm working on an app with an infinitely scrolling video feed, similar to TikTok or Instagram Reels. I initially thought it would be a good idea to cache videos in the file system, but after reading this post it seems that isn't recommended: https://forums.developer.apple.com/forums/thread/649810#:~:text=If%20the%20videos%20can%20be%20reasonably%20cached%20in%20RAM%20then%20we%20would%20recommend%20that.%20Regularly%20caching%20video%20to%20disk%20contributes%20to%20NAND%20wear
The reason I'm hesitant to cache videos in memory is that it adds up quickly and increases memory pressure in my app.
After seeing the amount of Documents & Data storage that Instagram uses, it's obvious they cache videos on the file system. So I was wondering: what is the current best practice for caching in these kinds of apps?
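One possible compromise, sketched under the assumption that some disk caching is acceptable as long as it stays bounded: a small, size-capped cache directory with oldest-first eviction, so writes to NAND stay limited (the 200 MB cap is an arbitrary placeholder, not a recommendation).

import Foundation

enum VideoCache {
    static let directory = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("VideoCache", isDirectory: true)
    static let maxBytes = 200 * 1024 * 1024

    // Delete the oldest files until the cache fits under the cap.
    static func enforceLimit() throws {
        let fm = FileManager.default
        let keys: [URLResourceKey] = [.fileSizeKey, .contentModificationDateKey]
        let files = try fm.contentsOfDirectory(at: directory, includingPropertiesForKeys: keys)
        var entries = try files.map { url -> (url: URL, size: Int, date: Date) in
            let values = try url.resourceValues(forKeys: Set(keys))
            return (url, values.fileSize ?? 0, values.contentModificationDate ?? .distantPast)
        }
        var total = entries.reduce(0) { $0 + $1.size }
        entries.sort { $0.date < $1.date }   // oldest first
        for entry in entries where total > maxBytes {
            try? fm.removeItem(at: entry.url)
            total -= entry.size
        }
    }
}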
Issue Description
When playing certain MIDI files using AVMIDIPlayer, the initial volume settings for individual tracks are being ignored during the first playback. This results in all tracks playing at the same volume level, regardless of their specified volume settings in the MIDI file.
Steps to Reproduce
Load a MIDI file that contains different volume settings for multiple tracks
Start playback using AVMIDIPlayer
Observe that all tracks play at the same volume level, ignoring their individual volume settings
Current Behavior
All tracks play at the same volume level during initial playback
Track volume settings specified in the MIDI file are not being respected
This behavior consistently occurs on first playback of affected MIDI files
Expected Behavior
Each track should play at its specified volume level from the beginning
Volume settings in the MIDI file should be respected from the first playback
Workaround
I discovered that the correct volume settings can be restored by:
Starting playback of the MIDI file
Setting the currentPosition property to (current time - 1 second)
After this operation, all tracks play at their intended volume levels
However, this is not an ideal solution as it requires manual intervention and may affect the playback experience.
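In code, the workaround described above looks roughly like this (a sketch: the 0.1-second delay and 1-second offset are guesses, and the caller must retain the returned player for the duration of playback):

import AVFoundation

func playWithVolumeWorkaround(midiFileURL: URL) throws -> AVMIDIPlayer {
    let player = try AVMIDIPlayer(contentsOf: midiFileURL, soundBankURL: nil)
    player.prepareToPlay()
    player.play(nil)
    // Nudge the position back shortly after playback starts so the per-track
    // volume events are re-applied.
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.1) {
        player.currentPosition = max(player.currentPosition - 1.0, 0)
    }
    return player
}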
Questions
Is there a way to ensure the track volume settings are respected during the initial playback?
Is this a known issue with AVMIDIPlayer?
Are there any configuration settings or alternative approaches that could resolve this issue?
Technical Details
iOS Version: 18.1.1 (22B91)
Xcode Version: 16.1 (16B40)
Hi,
I'm testing AVMIDIPlayer in order to replace classes built around AVAudioEngine with callback functions that send MIDI events.
To test, I use an NSMutableData filled with:
the MIDI header
a track for time signature
a track containing a few midi events.
I then create an instance of AVMIDIPlayer using that data.
Everything works fine for some instruments (00 … 20, or 90) but not for others (60, 70, …).
The MIDI header and the time-signature track are based on the MIDI.org sample,
https://midi.org/standard-midi-files-specification
RP-001_v1-0_Standard_MIDI_Files_Specification_96-1-4.pdf
The MIDI events are:
UInt8 trkEvents[] = {
    0x00, 0xC0, instrument,  // Tubular bell
    0x00, 0x90, 0x4C, 0xA0,  // Note 4C
    0x81, 0x40, 0x48, 0xB0,  // TS + Note 48
    0x00, 0xFF, 0x2F, 0x00}; // End
for (UInt8 i = 0; i < 3; i++) {
    printf("0x%X ", trkEvents[i]);
}
printf("\n");
[_midiTempData appendBytes:trkEvents length:sizeof(trkEvents)];
A template application is used to change the instrument via an NSTextField.
I was wondering whether anything specific is required for certain instruments?
The interface header:
#import <AVFoundation/AVFoundation.h>
NS_ASSUME_NONNULL_BEGIN
@interface TestMIDIPlayer : NSObject
@property (retain) NSMutableData *midiTempData;
@property (retain) NSURL *midiTempURL;
@property (retain) AVMIDIPlayer *midiPlayer;
- (void)createTest:(UInt8)instrument;
@end
NS_ASSUME_NONNULL_END
The implementation:
#pragma mark -
typedef struct _MThd {
    char  magic[4];       // = "MThd"
    UInt8 headerSize[4];  // 4 Bytes, MSB first. Always = 00 00 00 06
    UInt8 format[2];      // 16 bit, MSB first. 0; 1; 2 Use 1
    UInt8 trackCount[2];  // 16 bit, MSB first.
    UInt8 division[2];    //
} MThd;

MThd MThdMake(void);
void MThdPrint(MThd *mthd);

typedef struct _MIDITrackHeader {
    char  magic[4];        // = "MTrk"
    UInt8 trackLength[4];  // Ignore, because it is occasionally wrong.
} Track;

Track TrackMake(void);
void TrackPrint(Track *track);
#pragma mark - C Functions
MThd MThdMake(void) {
    MThd mthd = {
        "MThd",
        {0, 0, 0, 6},
        {0, 1},
        {0, 0},
        {0, 0}
    };
    MThdPrint(&mthd);
    return mthd;
}

void MThdPrint(MThd *mthd) {
    char *ptr = (char *)mthd;
    for (int i = 0; i < sizeof(MThd); i++, ptr++) {
        printf("%X", *ptr);
    }
    printf("\n");
}

Track TrackMake(void) {
    Track track = {
        "MTrk",
        {0, 0, 0, 0}
    };
    TrackPrint(&track);
    return track;
}

void TrackPrint(Track *track) {
    char *ptr = (char *)track;
    for (int i = 0; i < sizeof(Track); i++, ptr++) {
        printf("%X", *ptr);
    }
    printf("\n");
}
@implementation TestMIDIPlayer
- (id)init {
    self = [super init];
    printf("%s %p\n", __FUNCTION__, self);
    if (self) {
        _midiTempData = nil;
        _midiTempURL = [[NSURL alloc] initFileURLWithPath:@"midiTempUrl.mid"];
        _midiPlayer = nil;
        [self createTest:0x0E];
        NSLog(@"_midiTempData:%@", _midiTempData);
    }
    return self;
}

- (void)dealloc {
    [_midiTempData release];
    [_midiTempURL release];
    [_midiPlayer release];
    [super dealloc];
}
- (void)createTest:(UInt8)instrument {
    /* MIDI Header */
    [_midiTempData release];
    _midiTempData = nil;
    _midiTempData = [[NSMutableData alloc] initWithCapacity:1024];
    MThd mthd = MThdMake();
    MThd *ptrMthd = &mthd;
    ptrMthd->trackCount[1] = 2;
    ptrMthd->division[1] = 0x60;
    MThdPrint(ptrMthd);
    [_midiTempData appendBytes:ptrMthd length:sizeof(MThd)];

    /* Track Header
       Time signature */
    Track track = TrackMake();
    Track *ptrTrack = &track;
    ptrTrack->trackLength[3] = 0x14;
    [_midiTempData appendBytes:ptrTrack length:sizeof(track)];
    UInt8 trkEventsTS[] = {
        0x00, 0xFF, 0x58, 0x04, 0x04, 0x04, 0x18, 0x08, // Time signature 4/4; 18; 08
        0x00, 0xFF, 0x51, 0x03, 0x07, 0xA1, 0x20,       // Tempo 0x7A120 = 500000
        0x83, 0x00, 0xFF, 0x2F, 0x00 };                 // End
    [_midiTempData appendBytes:trkEventsTS length:sizeof(trkEventsTS)];

    /* Track Header
       Track events */
    ptrTrack->trackLength[3] = 0x0F;
    [_midiTempData appendBytes:ptrTrack length:sizeof(track)];
    UInt8 trkEvents[] = {
        0x00, 0xC0, instrument,  // Tubular bell
        0x00, 0x90, 0x4C, 0xA0,  // Note 4C
        0x81, 0x40, 0x48, 0xB0,  // TS + Note 48
        0x00, 0xFF, 0x2F, 0x00}; // End
    for (UInt8 i = 0; i < 3; i++) {
        printf("0x%X ", trkEvents[i]);
    }
    printf("\n");
    [_midiTempData appendBytes:trkEvents length:sizeof(trkEvents)];
    [_midiTempData writeToURL:_midiTempURL atomically:YES];

    dispatch_async(dispatch_get_main_queue(), ^{
        if (!_midiPlayer.isPlaying)
            [self midiPlay];
    });
}
- (void)midiPlay {
    NSError *error = nil;
    _midiPlayer = [[AVMIDIPlayer alloc] initWithData:_midiTempData soundBankURL:nil error:&error];
    if (_midiPlayer) {
        [_midiPlayer prepareToPlay];
        [_midiPlayer play:^{
            printf("Midi Player ended\n");
            [_midiPlayer stop];
            [_midiPlayer release];
            _midiPlayer = nil;
        }];
    }
}
@end
Call from AppDelegate
- (IBAction)actionInstrument:(NSTextField *)sender {
    [_testMidiplayer createTest:(UInt8)sender.intValue];
}
Hello,
I'm trying to implement deferred photo processing in my photo capture app. After I take a photo, I pass it through a CIFilter. Now, with deferred photo processing, where would I pass the resulting photo through the CIFilter, given that there is no way for me to know when the system has finished processing a photo?
If I have to do it in my app's foreground every time, how do I prevent a scenario where the user takes a photo, heads straight to the Photos app, and sees the image without the filter?
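One hedged option, assuming iOS 17's deferred photo APIs: save the proxy unmodified in the callback sketched below (so the system can finish processing in the background), and apply the CIFilter afterwards as a PhotoKit edit via PHContentEditingOutput, so the Photos app shows the edited rendition. DeferredCaptureDelegate is a hypothetical name, and this is a sketch rather than a confirmed pattern.

import AVFoundation
import Photos

final class DeferredCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?,
                     error: Error?) {
        guard error == nil,
              let data = deferredPhotoProxy?.fileDataRepresentation() else { return }
        // Save the proxy; the system replaces it with the final image later.
        PHPhotoLibrary.shared().performChanges({
            let request = PHAssetCreationRequest.forAsset()
            request.addResource(with: .photoProxy, data: data, options: nil)
        }, completionHandler: nil)
    }
}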
Topic: Media Technologies
SubTopic: Photos & Camera
Tags: Camera, Photos and Imaging, PhotoKit, AVFoundation
I found this phenomenon, and it can be reproduced reliably.
With the triple camera, suppose the scene is moving or I move the phone, say horizontally. When I aim at an object and press the shutter, call that moment time T; the frame in the viewfinder at that moment is T0, and the resulting photo corresponds to roughly T+100 ms.
With the single camera, moving the phone at the same speed and pressing the shutter while aiming at the same object, the resulting photo corresponds to roughly T+400 ms.
Let me describe the problem I encountered in another way.
Suppose a row of cards is laid out horizontally on the table, numbered from left to right: 0, 1, 2, 3, 4, 5, 6…
Now aim the camera at the number 0 and move to the right at a uniform speed; the numbers passing through the viewfinder keep increasing. When aiming at the number 5, press the shutter.
With the triple camera, the photo obtained will probably show 6, while with the single camera it will show about 9.
This means the triple camera can capture photos faster, but why is this the case? Any explanation?
Hello,
To create a test project, I want to understand how the video and audio settings would look for a destination video whose content comes from a source video.
I obtained the output from the source video in the audio and video tracks as follows:
let audioSettings = [
    AVFormatIDKey: kAudioFormatLinearPCM,
    AVSampleRateKey: 44100,
    AVNumberOfChannelsKey: 2
] as [String: Any]

var audioOutput = AVAssetReaderTrackOutput(track: audioTrack!,
                                           outputSettings: audioSettings)

// Video
let videoSettings = [
    kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
    kCVPixelBufferWidthKey: videoTrack!.naturalSize.width,
    kCVPixelBufferHeightKey: videoTrack!.naturalSize.height
] as [String: Any]

var videoOutput = AVAssetReaderTrackOutput(track: videoTrack!, outputSettings: videoSettings)
With this, I'm obtaining the CMSampleBuffer using AVAssetReader.copyNextSampleBuffer.
How can I add it to the destination video? Should I use a while loop, considering I already have the AVAssetWriter set up?
Something like this:
while let buffer = videoOutput.copyNextSampleBuffer() {
    if let imgBuffer = CMSampleBufferGetImageBuffer(buffer) {
        let frame = imgBuffer as CVPixelBuffer
        let time = CMSampleBufferGetPresentationTimeStamp(buffer)
        adaptor.append(frame, withPresentationTime: time)
    }
}
Lastly, regarding the destination video: how should the AVAssetWriterInput for audio and the pixel buffer adaptor of the destination video be set up? Please provide an example, something like:
let audioSettings = […] as [String: Any]
Looking forward to your response.
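For reference, one possible shape for the destination side (a sketch with placeholder settings; match them to the actual source and delivery requirements):

import AVFoundation

func makeWriter(outputURL: URL, videoSize: CGSize) throws
    -> (writer: AVAssetWriter, videoInput: AVAssetWriterInput,
        audioInput: AVAssetWriterInput, adaptor: AVAssetWriterInputPixelBufferAdaptor) {

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)

    let videoSettings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: Int(videoSize.width),
        AVVideoHeightKey: Int(videoSize.height)
    ]
    let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
    videoInput.expectsMediaDataInRealTime = false

    // The adaptor's pixel format should match what the reader output delivers.
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: videoInput,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: Int(videoSize.width),
            kCVPixelBufferHeightKey as String: Int(videoSize.height)
        ])

    let audioSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 2
    ]
    let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
    audioInput.expectsMediaDataInRealTime = false

    writer.add(videoInput)
    writer.add(audioInput)
    return (writer, videoInput, audioInput, adaptor)
}

Rather than a free-running while loop, the appends are usually driven from videoInput.requestMediaDataWhenReady(on:using:), appending only while isReadyForMoreMediaData is true, then calling markAsFinished() on each input and finishWriting(completionHandler:) on the writer.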
Hello, I am developing a service using capture sessions. I have a question concerning something curious I've noticed. Occasionally, I've been informed that the capture session stops working. Upon investigation, I found records of the videoDeviceNotAvailableWithMultipleForegroundApps interruption on the devices.
From what I've looked up in the documents, it seems to occur due to multitasking capabilities, but I'm wondering if there are any specific scenarios where this happens on iPhone devices? Here is the relevant documentation link: https://developer.apple.com/documentation/avfoundation/avcapturesession/interruptionreason/videodevicenotavailablewithmultipleforegroundapps
I suspect it might have something to do with Picture-in-Picture (PIP) mode, but when I developed and tested a direct video streaming PIP, the issue did not occur. Does anyone have insights on this matter or related experiences they could share?
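In case it helps narrow this down, a small diagnostic sketch for logging the interruption reason as it happens, so it can be correlated with what the user was doing at the time (session stands in for your AVCaptureSession):

import AVFoundation

func observeInterruptions(of session: AVCaptureSession) -> NSObjectProtocol {
    NotificationCenter.default.addObserver(forName: AVCaptureSession.wasInterruptedNotification,
                                           object: session,
                                           queue: .main) { note in
        if let rawValue = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
           let reason = AVCaptureSession.InterruptionReason(rawValue: rawValue) {
            print("Capture session interrupted, reason: \(reason)")
        }
    }
}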
I don't know what triggered this in a previously-running application I'm developing: When I have the build target set to "My Mac (designed for iPad)," I now must delete all the app's build materials under DerivedData to get the app to build and run exactly once. Cleaning isn't enough; I have to delete everything.
On second launch, it will crash without even getting to the instantiation of the application class. None of my code executes.
Also: If I then set my iPhone as the build target, the app will build and run repeatedly. If I then return to "My Mac (designed for iPad)," the app will again launch once and then crash on every subsequent launch.
The crash is the same every time:
dyld[3875]: Symbol not found: _OBJC_CLASS_$_AVPlayerView
Referenced from: <D566512D-CAB4-3EA6-9B87-DBD15C6E71B3> /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/Library/Debugger/libViewDebuggerSupport.dylib
Expected in: <4C34313C-03AD-32EB-8722-8A77C64AB959> /System/iOSSupport/System/Library/Frameworks/AVKit.framework/Versions/A/AVKit
Interestingly, I haven't found any similar online reports that mention this symbol.
Has anyone seen this behavior before, where the crash only happens after the first run... and gets reset when you toggle the target type?
Hello Apple Engineers,
Specific Issue:
I am working on a video recording feature in my SwiftUI app, and I am trying to record 4K60 video in ProRes Log format using the iPhone's internal storage. Here's what I have tried so far:
I am using AVCaptureSession with AVCaptureMovieFileOutput and configuring the session to support 4K resolution and ProRes codec.
The sessionPreset is set to .inputPriority, and the video device is configured with settings such as disabling HDR to prepare for Log.
However, when attempting to record 4K60 ProRes video, I get the error: "Capturing 4k60 with ProRes codec on this device is supported only on external storage device."
This error seems to imply that 4K60 ProRes recording is restricted to external storage devices. But I am trying to achieve this internally on devices such as the iPhone 15 Pro Max, which has native support for ProRes encoding.
Here are my questions:
Is it technically possible to record 4K60 ProRes Log video internally on supported iPhones (for example: iPhone 15 Pro Max)?
There are some third-party apps (e.g. Blackmagic 👍🏻) that can save 4K60 ProRes Log video to the iPhone's internal storage. If internal saving is supported, what additional configuration of the AVCaptureSession, or what other technique, is needed to get past this limitation?
If anyone has successfully saved 4K60 ProRes Log video on iPhone internal storage, your guidance would be highly appreciated.
Thank you for your help!
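It isn't clear how third-party apps record 4K60 ProRes Log to internal storage (one commonly suggested route is writing via AVAssetWriter fed by an AVCaptureVideoDataOutput instead of AVCaptureMovieFileOutput, but that is speculation). As a first diagnostic step, a sketch that lists which formats the device itself advertises for 4K, 60 fps and Apple Log:

import AVFoundation

func logAppleLog4K60Candidates(for device: AVCaptureDevice) {
    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        let supports60fps = format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 60 }
        let supportsAppleLog = format.supportedColorSpaces.contains(.appleLog)
        if dims.width >= 3840 && dims.height >= 2160 && supports60fps && supportsAppleLog {
            print("Candidate format: \(format)")
        }
    }
}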
In my iOS app I present a QLPreviewController in which I want to display a locally stored video from the iPhone's Documents directory.
let previewController = QLPreviewController()
previewController.dataSource = self
self.present(previewController, animated: true, completion: nil)
func previewController(_ controller: QLPreviewController, previewItemAt index: Int) -> QLPreviewItem {
    let url = urlForPreview
    return url! as QLPreviewItem
}
This seems to work fine for all but one of my TestFlight users. He is using an iPhone 12 with iOS 18.0.1. The screen becomes unresponsive: he cannot pause the video, share it, or close the QLPreviewController.
In his logfile I see the following error...
[AVAssetTrack loadValuesAsynchronouslyForKeys:completionHandler:] invoked with unrecognized keys (
"currentVideoTrack.preferredTransform")
Any ideas?
In the docs, it looks like there is something that allows you to set a background replacement image in the Control Center controls (like on a Mac).
However, I can't find any documentation on it, beyond this reference in the Apple docs. https://developer.apple.com/documentation/avfoundation/avcapturedevice/isbackgroundreplacementactive?language=objc
Does anyone have any advice on enabling camera backgrounds system-wide?
Title: Unable to Access Microphone in Control Center Widget – Is It Possible?
Hello everyone,
I'm attempting to create a widget in the Control Center that accesses the microphone, similar to how Shazam does it. However, I'm running into an issue where the widget always prints "Microphone permission denied." It's worth mentioning that microphone access works fine when I'm using the app itself.
Here's the code I'm using in the widget:
func startRecording() async {
    logger.info("Starting recording...")
    print("Starting recording...")
    recognizedText = ""
    isFinishingRecognition = false

    // First, check speech recognition authorization
    let speechAuthStatus = await withCheckedContinuation { continuation in
        SFSpeechRecognizer.requestAuthorization { status in
            continuation.resume(returning: status)
        }
    }
    guard speechAuthStatus == .authorized else {
        logger.error("Speech recognition not authorized")
        return
    }

    // Then, request microphone permission using our manager
    let micPermission = await AudioSessionManager.shared.requestMicrophonePermission()
    guard micPermission else {
        logger.error("Microphone permission denied")
        print("Microphone permission denied")
        return
    }

    // Continue with recording...
}
Issues:
The code consistently prints "Microphone permission denied" when run from the widget.
Microphone access works without issues when the same code is executed from within the app.
Questions:
Is it possible for a Control Center widget to access the microphone?
If yes, what might be causing the "Microphone permission denied" error in the widget?
Are there additional permissions or configurations required to enable microphone access in a widget?
Any insights or suggestions would be greatly appreciated!
Thank you.
I'm trying to implement anti-spoofing in an iOS app using the iPhone's TrueDepth front camera. I have checked the following questions but still can't find a proper working solution.
I trained a Core ML model using 22,000 depth images of human faces and 22,000 non-face images (objects, food, etc.). The accuracy of the model is very low.
When testing with flat 2D images shown on a smartphone screen, I found that I still get a depth map. Even though the image is flat, why does it produce a depth map for the person shown in the flat 2D picture, so that the model thinks it is a real face instead of a spoofed one?
I implemented depth capture by following this documentation, and I made sure that I get a depth map instead of a disparity map:
https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_photos_with_depth
My next approach was to use the NCNN framework to implement anti-spoofing with the model from the Mini-vision Android anti-spoofing sample. I rewrote their library for iOS using an Objective-C++ wrapper, since the sample was only available as an Android app. When I tested it by feeding an 80x80 UIImage in an OpenCV matrix format, its accuracy was lower than the Android version.
How can I solve this problem?
For an upcoming update of one of my apps, I’m facing an issue:
The .rate parameter of an AVAudioUnitTimePitch lets me slow down an audio track without any issues: setting .rate to 0.7 or 0.8 results in almost perfect playback without changing pitch.
However, whenever the .rate parameter is greater than 1 (e.g. 1.1 or 1.15), I start to hear audio artifacts ("fluttering") in the output, which is not so nice (even at .overlap = 32).
Intuitively, I'd have thought that speeding up the file should produce fewer artifacts than slowing it down?
I've tried different sample rates (44.1 kHz and 48 kHz), but got the same result.
Grateful for any input on this 🙏
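For context, the node chain in question looks something like this minimal sketch (the rate and overlap values are the ones mentioned above):

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let timePitch = AVAudioUnitTimePitch()
timePitch.rate = 1.15       // > 1.0 is where the artifacts appear
timePitch.overlap = 32.0    // higher overlap trades CPU for smoother output

engine.attach(player)
engine.attach(timePitch)
engine.connect(player, to: timePitch, format: nil)
engine.connect(timePitch, to: engine.mainMixerNode, format: nil)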
Hi. I'm encountering some random crashes in my camera app. After some investigation, I found that it is being terminated by the system; a crash log is generated, but the information in it isn't very useful. Here is the log found via the Console app.
Termination & Crash log
"Camera not actively used; AVCaptureEventInteraction not installed":
Received termination request from [osservice<com.apple.SpringBoard>:10931] on <RBSProcessPredicate <RBSProcessInstancePredicate| [app<com.juniperphoton.PhotonCam]>> with context <RBSTerminateContext| explanation:Capture Application Requirements Unmet: "Camera not actively used; AVCaptureEventInteraction not installed" reportType:CrashLog maxTerminationResistance:Interactive>
The crash log exported from the device has some common information, like:
It's an EXC_CRASH (SIGKILL) type with no informative termination reason.
Exception Type: EXC_CRASH (SIGKILL)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: RUNNINGBOARD 0
It's triggered by the main thread, but it seems to be waiting for an event to process.
Triggered by Thread: 0
Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x1ee165788 mach_msg2_trap + 8
1 libsystem_kernel.dylib 0x1ee168e98 mach_msg2_internal + 80
2 libsystem_kernel.dylib 0x1ee168db0 mach_msg_overwrite + 424
3 libsystem_kernel.dylib 0x1ee168bfc mach_msg + 24
4 CoreFoundation 0x19cbe47f4 __CFRunLoopServiceMachPort + 160
5 CoreFoundation 0x19cbe3ea0 __CFRunLoopRun + 1212
6 CoreFoundation 0x19cc36274 CFRunLoopRunSpecific + 588
7 GraphicsServices 0x1e9d6d4c0 GSEventRunModal + 164
8 UIKitCore 0x19f783480 -[UIApplication _run] + 816
9 UIKitCore 0x19f3a9410 UIApplicationMain + 340
10 UIKitCore 0x19fae4bb0 0x19f394000 + 7670704
11 PhotonCam 0x1002e7e3c 0x1002cc000 + 114236
12 dyld 0x1c2d5ade8 start + 2724
Address size fault on the main thread
Thread 0 crashed with ARM Thread State (64-bit):
...
far: 0x0000000000000000 esr: 0x56000080 Address size fault
I once tried to reproduce this issue with the app attached to the debugger, and it says:
Terminated due to signal 9
When the crash or termination happens:
No AVCaptureSession is running.
The app is in the foreground and the user is interacting with features like viewing or editing photos. When the user leaves the camera view, for example to enter the gallery or settings, the camera session is stopped.
Both the TestFlight and Debug builds have the same issue.
No third-party crash reporter is installed (I deliberately disable it in the Debug and TestFlight builds).
The app has adopted LockedCameraCapture, but it is currently running as the main app target (if it weren't, my app would show an unlock button, so I can confirm this).
Also, regarding memory consumption, there is no JetsamEvent around the crash time.
Device and app information
Additionally, some information about the tech stack and the current state of my device and my app:
iPhone 16 Pro with iOS 18.2 Beta 3.
The app is a camera app (it's PhotonCam, and you can find it on the App Store); its main functionality is built with AVFoundation + Core Image + Metal.
It has adopted the Camera Control, AVCaptureEventInteraction and LockedCameraCapture features.
If I remember correctly, it also occurs in the iOS 18.1 release build, but I currently have no such device to confirm. In iOS 17.x the issue never happened.
Regarding this termination, the first thing that comes to mind is the "watchdog" mechanism that terminates processes running under the LockedCameraCapture feature. However, I can confirm that the app is currently running as the main target launched from the Home Screen.
Has anybody encountered this kind of issue and found a solution? Thanks in advance.
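Since the termination message calls out a missing AVCaptureEventInteraction, here is a sketch of installing one on the view that hosts the camera UI; whether this actually addresses the RunningBoard termination described above is an assumption.

import AVKit
import UIKit

func installCaptureEventInteraction(on view: UIView) {
    let interaction = AVCaptureEventInteraction { event in
        switch event.phase {
        case .began:
            break            // e.g. show capture feedback
        case .ended:
            break            // e.g. trigger the photo capture
        case .cancelled:
            break
        @unknown default:
            break
        }
    }
    interaction.isEnabled = true
    view.addInteraction(interaction)
}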
I'm developing an iOS app using DockKit to control a motorized stand. I've noticed that as the zoom factor of the AVCaptureDevice increases, the stand's movement becomes increasingly erratic up and down, almost like a pendulum motion. I'm not sure why this is happening or how to fix it.
Here's a simplified version of my tracking logic:
func trackObject(_ boundingBox: CGRect, _ dockAccessory: DockAccessory) async throws {
    guard let device = AVCaptureDevice.default(for: .video),
          let input = try? AVCaptureDeviceInput(device: device) else {
        fatalError("Camera not available")
    }

    let currentZoomFactor = device.videoZoomFactor
    let dimensions = device.activeFormat.formatDescription.dimensions
    let referenceDimensions = CGSize(width: CGFloat(dimensions.width), height: CGFloat(dimensions.height))
    let intrinsics = calculateIntrinsics(for: device, currentZoom: Double(currentZoomFactor))

    let deviceOrientation = UIDevice.current.orientation
    let cameraOrientation: DockAccessory.CameraOrientation = {
        switch deviceOrientation {
        case .landscapeLeft: return .landscapeLeft
        case .landscapeRight: return .landscapeRight
        case .portrait: return .portrait
        case .portraitUpsideDown: return .portraitUpsideDown
        default: return .unknown
        }
    }()

    let cameraInfo = DockAccessory.CameraInformation(
        captureDevice: input.device.deviceType,
        cameraPosition: input.device.position,
        orientation: cameraOrientation,
        cameraIntrinsics: useIntrinsics ? intrinsics : nil,
        referenceDimensions: referenceDimensions
    )

    let observation = DockAccessory.Observation(
        identifier: 0,
        type: .object,
        rect: boundingBox
    )
    let observations = [observation]

    try await dockAccessory.track(observations, cameraInformation: cameraInfo)
}

func calculateIntrinsics(for device: AVCaptureDevice, currentZoom: Double) -> matrix_float3x3 {
    let dimensions = CMVideoFormatDescriptionGetDimensions(device.activeFormat.formatDescription)
    let width = Float(dimensions.width)
    let height = Float(dimensions.height)

    let diagonalPixels = sqrt(width * width + height * height)
    let estimatedFocalLength = diagonalPixels * 0.8

    let fx = Float(estimatedFocalLength) * Float(currentZoom)
    let fy = fx
    let cx = width / 2.0
    let cy = height / 2.0

    return matrix_float3x3(
        SIMD3<Float>(fx, 0, cx),
        SIMD3<Float>(0, fy, cy),
        SIMD3<Float>(0, 0, 1)
    )
}
I'm calling this function regularly (10-30 times per second) with updated bounding box information. The erratic movement seems to worsen as the zoom factor increases.
Questions:
Why might increasing the zoom factor cause this erratic movement?
I'm currently calculating camera intrinsics based on the current zoom factor. Is this approach correct, or should I be doing something differently?
Are there any other factors I should consider when using DockKit with a variable zoom?
Could the frequency of calls to the tracking function (10-30 times per second) be contributing to the erratic movement? If so, what would be an optimal frequency?
Any insights or suggestions would be greatly appreciated. Thanks!