Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

How can third-party iOS apps obtain real-time waveform / spectrogram data for Apple Music tracks (similar to djay & other DJ apps)?
Hi everyone,

I'm working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I'd like access to low-level audio analysis data (ideally a waveform / spectrogram or beat grid) while the track is playing.

I've noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already:
• Display detailed scrolling waveforms of Apple Music songs
• Scratch, loop or time-stretch those tracks in real time

That implies they receive decoded PCM frames, or at least high-resolution analysis data, from Apple Music under a special entitlement.

My questions:
1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content?
2. If not, is there an Apple program or entitlement that developers can apply for (similar to the "DJ with Apple Music" initiative) to gain that deeper access?
3. Where can I find official documentation or a point of contact for this kind of request?

I've searched the docs and forums but only see the standard MusicKit playback APIs, which don't appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated! Thanks in advance.
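For what it's worth, the only public route to real-time PCM on iOS is to decode the audio yourself, which rules out DRM-protected Apple Music streams. A minimal sketch of the AVAudioEngine tap approach, which works for local, non-protected files only (TappedPlayer is an illustrative name, not an Apple API):

import AVFoundation

/// Sketch: real-time PCM access via an AVAudioEngine tap.
/// Works only for audio the app decodes itself (a local, non-DRM file);
/// MusicKit playback of streamed Apple Music content never passes
/// through your engine, so this cannot analyze those tracks.
final class TappedPlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()

    func start(url: URL) throws {
        let file = try AVAudioFile(forReading: url)
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

        // Every buffer delivered here is raw PCM, suitable for waveform
        // drawing or an FFT.
        engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, _ in
            guard let ch0 = buffer.floatChannelData?[0] else { return }
            let frames = Int(buffer.frameLength)
            // Feed ch0[0..<frames] into analysis code here.
            _ = (ch0, frames)
        }

        try engine.start()
        player.scheduleFile(file, at: nil)
        player.play()
    }
}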
Replies: 1 · Boosts: 1 · Views: 462 · Activity: 1d
Changing Frame Rate of External Display on iPad
Hello,

As far as I know, and in all of my testing, there is no way for a user or a developer to change the frame rate of the video output on iPadOS. If you connect an iPad via a USB hub or a USB-to-HDMI adaptor and then connect it to an external monitor, it will output at 59.94 fps.

I have a video app where a user monitors live video at 25 fps and 30 fps. They often output to an external display, and there are times when the external display will stutter due to the mismatch in frame rate, i.e. using 25 fps and outputting at 59.94 fps.

I thought it was impossible to change the video output frame rate. Then, in v3.1 of the Blackmagic Camera app, I saw an interesting change in their release notes: 'Support for HDMI Monitoring at Sensor Rate and Resolution'.

This means there is some way to modify it. I'm not sure if this is done via a private API that Apple has allowed Blackmagic to use. If so, how can we access this, or is there a way to enable it that is undocumented? Thanks!
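Not a fix, but when diagnosing the mismatch it can help to log what iPadOS itself reports for the external screen. A small diagnostic sketch (note that UIScreen.screens is deprecated on recent iPadOS in favor of scene-based lookup, so treat this as debug code only):

import UIKit

// Diagnostic sketch: log the modes and maximum refresh rate iPadOS
// reports for every connected screen. In practice the external screen
// appears to report one fixed rate regardless of the app's content rate.
func logExternalDisplayCapabilities() {
    for screen in UIScreen.screens {
        print("Screen: \(screen), maxFPS: \(screen.maximumFramesPerSecond)")
        for mode in screen.availableModes {
            print("  mode: \(mode.size) @ pixelAspectRatio \(mode.pixelAspectRatio)")
        }
    }
}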
Replies: 3 · Boosts: 0 · Views: 415 · Activity: 2d
AVAudioUnitSampler Bug with Consolidated Audio Files
Hello,

I've discovered a buffer initialization bug in AVAudioUnitSampler that happens when loading presets with multiple zones referencing different regions in the same audio file (the monolith / concatenated-samples approach). Almost all zones output silence (i.e. zeros) at the beginning of playback instead of starting with actual audio data.

The Problem

Setup:
• A single audio file (monolith) containing multiple concatenated samples
• Multiple zones in an .aupreset, each with different sample start and sample end values pointing to different regions of the same file
• All zones load successfully without errors

Expected behavior: all zones should play their respective audio regions immediately, from the first sample.

Actual behavior:
• Last zone in the zone list: works perfectly, plays audio immediately
• All other zones: output [0, 0, 0, 0, ..., real_audio_data] instead of [real_audio_data]

The number of zeros varies from event to event for each zone. It can be a couple of samples (<30) up to several buffers. After the initial zeros, the correct audio plays normally, so there is no shift in audio playback, just missing samples at the beginning.

Minimal Reproduction

1. Create a test monolith audio file: a single WAV file with 3 concatenated 1-second samples (44.1 kHz):
• Sample 1: frames 0-44099 (constant amplitude 0.3)
• Sample 2: frames 44100-88199 (constant amplitude 0.6)
• Sample 3: frames 88200-132299 (constant amplitude 0.9)

2. Create a test preset: an .aupreset with 3 zones all referencing the same file (pseudo code):

<Zone array>
  <zone 1> start: 0, end: 44099, note: 60, waveform: ref_to_monolith.wav
  <zone 2> start: 44100, end: 88199, note: 62, waveform: ref_to_monolith.wav
  <zone 3> start: 88200, end: 132299, note: 64, waveform: ref_to_monolith.wav
</Zone array>

3. Load and test:

let sampler = AVAudioUnitSampler()
try sampler.loadInstrument(at: presetURL) // .aupreset files load via loadInstrument(at:)

// Play each zone (MIDI notes C4=60, D4=62, E4=64)
sampler.startNote(60, withVelocity: 64, onChannel: 0) // Zone 1
sampler.startNote(62, withVelocity: 64, onChannel: 0) // Zone 2
sampler.startNote(64, withVelocity: 64, onChannel: 0) // Zone 3

4. Observed result:
• Zone 1 (C4): [0, 0, 0, ..., 0.3, 0.3, 0.3] ❌ zeros at beginning
• Zone 2 (D4): [0, 0, 0, ..., 0.6, 0.6, 0.6] ❌ zeros at beginning
• Zone 3 (E4): [0.9, 0.9, 0.9, ...] ✅ works correctly (last zone)

What I've Extensively Tested

What DOES work:
• Separate files per zone: each zone references its own individual audio file, and all zones play correctly without zeros.
• Problem: not viable for iOS apps with 500+ sample libraries due to file handle limitations.

What DOESN'T work (all tested):
1. Different audio formats: CAF (Float32 PCM, Int16 PCM, both interleaved and non-interleaved), M4A (AAC compressed), WAV (uncompressed), SF2 (SoundFont 2). The bug persists across all formats.
2. CAF region chunks: created CAF files with embedded region chunks defining zone boundaries, and set zones with no sampleStart/sampleEnd in the preset (nil values). AVAudioUnitSampler completely ignores CAF region metadata. The bug persists.
3. Unique waveform IDs: gave each zone a unique waveform ID (268435456, 268435457, 268435458), each with its own file reference entry (all pointing to the same physical file). Hypothesized this might trigger separate buffer initialization. The bug persists, no improvement.
4. Different sample rates: tested 44.1 kHz, 48 kHz, 96 kHz. The bug occurs at all sample rates.
5. Mono vs stereo: the bug occurs with both mono and stereo files.

Environment
• macOS: Sonoma 14.x (tested across multiple minor versions)
• iOS: tested on iOS 17.x with the same results
• Xcode: 16.x
• Frameworks: AVFoundation, AudioToolbox
• Reproducibility: 100% reproducible with the setup described above

Impact & Use Case

This bug severely impacts professional music applications that need:
• Small file sizes: monolith files allow sharing compressed audio data (AAC/M4A)
• iOS file handle limits: opening 400+ individual sample files is not viable on iOS
• Performance: single-file loading is much faster than hundreds of individual files
• Standard industry practice: monolith/concatenated samples are used by EXS24, Kontakt, and most professional samplers

Current impact: monolith files cannot be used with AVAudioUnitSampler on iOS. We are forced to choose between unusable audio (zeros at start) or hitting iOS file limits. No viable workaround exists.

Root Cause Hypothesis

The bug appears to be in AVAudioUnitSampler's internal buffer initialization when multiple zones share the same source audio file and each zone specifies different sampleStart/sampleEnd offsets. Key observation: the last zone in the zone array always works correctly.

This is NOT related to:
• File permissions or security-scoped resources (separate files work fine)
• Audio codec issues (it happens with uncompressed PCM too)
• Preset parsing (the preset loads correctly, all zones are valid)

Questions
1. Is this a known issue? I couldn't find any documentation, bug reports, or discussions about this.
2. Is there ANY workaround that allows monolith files to work with AVAudioUnitSampler?
3. Alternative APIs: is there a different API or approach for iOS that properly supports monolith sample files?
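For anyone reproducing this, the leading zeros can be measured deterministically with AVAudioEngine's offline manual rendering mode rather than by ear. A sketch (checkLeadingZeros is my name; presetURL is the preset from step 2 above):

import AVFoundation

// Sketch: render the sampler offline and count leading zero samples
// in the first buffer, per note / zone.
func checkLeadingZeros(presetURL: URL, note: UInt8) throws {
    let engine = AVAudioEngine()
    let sampler = AVAudioUnitSampler()
    engine.attach(sampler)
    engine.connect(sampler, to: engine.mainMixerNode, format: nil)

    let format = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)!
    try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 4096)
    try engine.start()
    try sampler.loadInstrument(at: presetURL)

    sampler.startNote(note, withVelocity: 64, onChannel: 0)

    let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 4096)!
    _ = try engine.renderOffline(4096, to: buffer)

    let samples = buffer.floatChannelData![0]
    let leadingZeros = (0..<Int(buffer.frameLength)).prefix { samples[$0] == 0 }.count
    print("note \(note): \(leadingZeros) leading zero samples")
    engine.stop()
}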
Replies: 0 · Boosts: 0 · Views: 245 · Activity: 3d
Switching default input/output channels using Core Audio
I wrote a Swift macOS app to control a PCI audio device. The code switches between the default output and input channels. As soon as I launch the Audio MIDI Setup utility, channel switching stops working: the driver properties allow switching, but the system doesn't respond. I have to delete the contents of /Library/Preferences/Audio and reset Core Audio. What am I missing?

func setDefaultChannelsOutput() {
    guard let deviceID = getDeviceIDByName(deviceName: "PCI-424") else { return }

    let selectedIndex = DefaultChannelsOutput.indexOfSelectedItem
    if selectedIndex < 0 || selectedIndex >= 24 { return }

    let channel1 = UInt32(selectedIndex * 2 + 1)
    let channel2 = UInt32(selectedIndex * 2 + 2)
    var channels: [UInt32] = [channel1, channel2]

    var propertyAddress = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyPreferredChannelsForStereo,
        mScope: kAudioDevicePropertyScopeOutput,
        mElement: kAudioObjectPropertyElementWildcard
    )

    let dataSize = UInt32(MemoryLayout<UInt32>.size * channels.count)
    let status = AudioObjectSetPropertyData(deviceID, &propertyAddress, 0, nil, dataSize, &channels)
    if status != noErr {
        print("Error setting default output channels: \(status)")
    }
}
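Not an answer to the Audio MIDI Setup interaction, but when debugging this kind of issue it can help to read the property back after setting it, and to address the main element explicitly rather than the wildcard. A hedged sketch (kAudioObjectPropertyElementMain is the modern spelling of the old master element):

import CoreAudio

// Sketch: read back the preferred stereo pair to confirm whether the
// set actually took effect.
func readPreferredStereoPair(deviceID: AudioObjectID) -> [UInt32]? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyPreferredChannelsForStereo,
        mScope: kAudioDevicePropertyScopeOutput,
        mElement: kAudioObjectPropertyElementMain
    )
    var channels: [UInt32] = [0, 0]
    var dataSize = UInt32(MemoryLayout<UInt32>.size * channels.count)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &dataSize, &channels)
    guard status == noErr else {
        print("Read-back failed: \(status)")
        return nil
    }
    return channels
}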
Replies: 0 · Boosts: 0 · Views: 219 · Activity: 4d
Unable to record at more than 30 fps on Pro and Pro Max models only
We're developing an AVFoundation-based video recording app (4K @ 60 fps required for biomechanical analysis). On most devices this works perfectly (iPhone 12/14/15/16 non-Pro models), but on several iPhone Pro models (12 Pro, 13 Pro, 14 Pro, 15 Pro/Pro Max) we consistently get 4K 30 fps recordings, even when the device should support 4K 60 fps on the wide-angle camera.

What we observe
• We configure the session for .hd4K3840x2160.
• We iterate through AVCaptureDevice.formats and select formats that have 3840×2160 resolution and support ≥60 fps (videoSupportedFrameRateRanges).
• On some Pro devices, this format search returns no results, even though the Camera app records 4K60 fine and external references list the wide camera as 4K60-capable.
• The fallback becomes the device's default 4K30 format, so final files are 3840×2160 @ 30 fps.
• This happens immediately on app launch (not after heating), so it is not thermal-related.

What we've tried
• Force-selecting .builtInWideAngleCamera instead of the dual/triple cameras.
• Disabling HDR (videoHDREnabled = false).
• Disabling low-light boost.
• Allowing 59.94 fps formats (in case exact 60.0 isn't exposed).
• Logging all videoSupportedFrameRateRanges per format.

What we're seeing in logs

On affected Pro devices, the capture device reports only 4K formats with maxFrameRate ≈ 30 fps, despite the hardware being able to do 4K60.

Main question

Has anyone encountered cases where 4K60 formats are available in the Camera app but not exposed through AVFoundation, especially on Pro models or multi-camera devices? Could HEVC/HDR capability or multi-camera constraints be preventing certain formats from appearing? Are there known conditions where 4K60 formats are hidden unless a specific device configuration is applied? Any guidance on reliably locking 4K60 on iPhone Pro models via AVFoundation would be hugely appreciated.
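For comparison, here is the selection-and-lock pattern I would expect to work when the format is exposed. The one gotcha I know of: assigning sessionPreset after setting activeFormat silently replaces the chosen format, so the preset must be left at .inputPriority. A sketch (configure4K60 and the 59.94 threshold are mine):

import AVFoundation

// Sketch: pick a 3840x2160 format supporting >= 60 fps and lock it in.
// Note: setting session.sessionPreset AFTER activeFormat silently
// replaces the chosen format; leave the preset at .inputPriority.
func configure4K60(session: AVCaptureSession, device: AVCaptureDevice) throws {
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    let format = device.formats.first { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        let supports60 = format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 59.94 }
        return dims.width == 3840 && dims.height == 2160 && supports60
    }
    guard let format else {
        print("No 4K >=60fps format exposed on this device")
        return
    }

    try device.lockForConfiguration()
    device.activeFormat = format   // this flips sessionPreset to .inputPriority
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 60)
    device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 60)
    device.unlockForConfiguration()
}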
Replies: 0 · Boosts: 0 · Views: 228 · Activity: 4d
How to Implement Screen Mirroring in iOS for Google TV?
I am developing an iOS application that supports screen mirroring to Google TV (or Chromecast with Google TV). My goal is to mirror the iPhone/iPad screen in real time to a Google TV device.

What I Have Tried So Far

I have explored multiple approaches but haven't found a direct way to achieve low-latency screen mirroring. Here are some of my findings:

1. Google Cast SDK: the Cast SDK is primarily designed for casting media (videos, images, audio) rather than real-time mirroring. It supports custom receiver applications, but there are no direct APIs for full screen mirroring. Casting a recorded video is possible, but it introduces latency and is not real-time.

2. ReplayKit for screen capture: RPScreenRecorder.shared().startCapture(handler: ...) allows capturing the iPhone screen as a video stream. However, sending this stream to Google TV in real time is a challenge. I could potentially encode the video as HLS and stream it, but the delay is significant.

3. RTSP/UDP streaming: some third-party libraries support RTSP/UDP streaming for real-time screen sharing, but Google TV does not natively support RTSP, making this approach difficult.

My Questions:
1. Is it possible to achieve real-time screen mirroring on Google TV using the Google Cast SDK?
2. Does Google TV support WebRTC or any low-latency streaming protocol that can be used from iOS?
3. Are there any alternative approaches to mirror an iOS screen to Google TV with minimal latency?

I would appreciate any guidance, code examples, or references to relevant documentation.
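On the capture side at least, the ReplayKit piece is straightforward. A minimal sketch that hands each video sample buffer to a sender you would still have to build (the send closure and any transport to the TV are placeholders, not a real API):

import ReplayKit
import CoreMedia

// Sketch: capture the screen with ReplayKit and forward video frames.
// `send` stands in for whatever transport you end up with
// (WebRTC track, custom encoder + socket, etc.).
func startMirrorCapture(send: @escaping (CMSampleBuffer) -> Void) {
    let recorder = RPScreenRecorder.shared()
    recorder.isMicrophoneEnabled = false

    recorder.startCapture(handler: { sampleBuffer, bufferType, error in
        if let error { print("capture error: \(error)"); return }
        guard bufferType == .video, CMSampleBufferIsValid(sampleBuffer) else { return }
        send(sampleBuffer)   // encode (e.g. VTCompressionSession -> H.264) and ship
    }, completionHandler: { error in
        if let error { print("startCapture failed: \(error)") }
    })
}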
Replies: 1 · Boosts: 1 · Views: 575 · Activity: 4d
VTLowLatencyFrameInterpolationConfiguration supported dimensions
Are there limits on the supported dimensions for VTLowLatencyFrameInterpolationConfiguration? Querying VTLowLatencyFrameInterpolationConfiguration.maximumDimensions and VTLowLatencyFrameInterpolationConfiguration.minimumDimensions returns nil.

When I try the WWDC sample project EnhancingYourAppWithMachineLearningBasedVideoEffects with a 4K video, this statement executes:

try frameProcessor.startSession(configuration: configuration)

but

try await frameProcessor.process(parameters: parameters)

throws the error:

Error Domain=VTFrameProcessorErrorDomain Code=-19730 "Processor is not initialized" UserInfo={NSLocalizedDescription=Processor is not initialized}

Also, why is VTLowLatencyFrameInterpolationConfiguration able to run while the app is backgrounded, but VTFrameRateConversionParameters can't (due to GPU usage)?
Replies: 2 · Boosts: 0 · Views: 276 · Activity: 5d
VTFrameRateConversionConfiguration doesn't support 640x480
Hello, I'm using the VideoToolbox VTFrameRateConversionConfiguration to perform frame interpolation (https://developer.apple.com/documentation/videotoolbox/vtframerateconversionconfiguration?language=objc). With 640x480 video input, I get this error:

Error ! Invalid configuration
[VEEspressoModel] build failure : flow_adaptation_feature_extractor_rev2.espresso.net. Configuration: landscape640x480
[EpsressoModel] Cannot load Net file flow_adaptation_feature_extractor_rev2.espresso.net. Configuration: landscape640x480
Error: failed to create FRCFlowAdaptationFeatureExtractor for usage 8
Failed to switch (0x12c40e140) [usage:8, 1/4 flow:0, adaptation layer:1, twoStage:0, revision:2, flow size (320x240)]. Could not init FlowAdaptation
initFlowAdaptationWithError fail

The same code works fine with 2048x1080 input.
Replies: 2 · Boosts: 0 · Views: 167 · Activity: 5d
AVPlayer loading performance problem in iOS 26
Hi,

I have an app that displays tens of short (<1 MB) mp4 videos, stored on a remote server, in a vertical UICollectionView that has horizontally scrollable sections. I'm caching all mp4 files on disk after downloading, and I also have an in-memory cache that holds a limited number (around 30) of players. The players I'm using are simple views that wrap an AVPlayerLayer and its AVPlayerItem, along with a few additional UI components.

The scrolling performance was good before iOS 26, but with the release of iOS 26 I noticed significant stuttering during scrolling while creating players with a file URL. It happens even if I use the same video file cached on disk for each cell for testing. I also started getting this kind of log message after the players are deinitialized:

<<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1107
<<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1095
<<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1095

There's also another log message that I see occasionally, but I don't know what triggers it:

<< FigXPC >> signalled err=-16152 at <>:1683

Has anyone else experienced this kind of problem with the latest release? Also, I'm wondering what the best way to resolve the issue is. I could increase the size of the memory cache to something large like 100, but I'm not sure that's an acceptable solution because:
1. There will be 100 player instances in memory at all times.
2. There will still be stuttering during the initial loading of the videos from the web.

Any help is appreciated!
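One mitigation worth trying (a sketch, not a confirmed fix for the iOS 26 regression): build the AVURLAsset and load its playability off the main thread before a cell ever touches an AVPlayer, so player creation during scrolling does less synchronous work:

import AVFoundation

// Sketch: warm up an asset before handing it to a player, so scrolling
// doesn't pay for synchronous property loading on the main thread.
func preparePlayerItem(for fileURL: URL) async throws -> AVPlayerItem {
    let asset = AVURLAsset(url: fileURL)
    // Loads playability/duration asynchronously (iOS 16+ async API).
    _ = try await asset.load(.isPlayable, .duration)
    return AVPlayerItem(asset: asset)
}

// Usage from a prefetch path:
// let item = try await preparePlayerItem(for: cachedFileURL)
// player.replaceCurrentItem(with: item)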
Replies: 1 · Boosts: 0 · Views: 306 · Activity: 5d
Query regarding migrating from FairPlay Streaming SDK version 4.5.4 to version 26
Our license service is based on version 4.5.4, and we make use of the sample .c/.h files for building the license service. We are told that version 4.5.4 is going to be deprecated in 2026 and that we should migrate to the latest SDK, version 26. When we explored the SDK, we noticed that only Python- and Swift-based SDKs are provided. Does Apple also provide a C/C++ based SDK, as that would be easier for us to integrate? If yes, please share the SDK package and a sample license service solution.
Replies: 1 · Boosts: 0 · Views: 68 · Activity: 5d
Are there known cases where DepthData is empty while Face ID is working?
We are experiencing an issue related to DepthData from the TrueDepth camera on a specific device. On December 1, we tested with the complainant's device, an iPhone 14 on iOS 26.0.1, and observed that the depth image is received with empty values. However, the same implementation works normally on iPhone 17 Pro Max (iOS 26.1) and iPhone 13 Pro Max (iOS 26.0.1), where depth data is delivered correctly.

In the problematic case:
• The TrueDepth camera is active
• Face ID works normally
• The app receives a DepthData object, but all values are empty (0), not nil

Because the DepthData object is not nil, it is difficult to detect the issue through software fallback handling. We developed the feature with reference to the following Apple sample: https://developer.apple.com/documentation/AVFoundation/streaming-depth-data-from-the-truedepth-camera

We would like to ask:
1. Are there known cases where Face ID functions normally but DepthData from the TrueDepth camera is returned as empty values?
2. If so, is there a recommended approach for identifying or handling this situation?

Any guidance from Apple engineers or the community would be greatly appreciated. Thank you.
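Until the root cause is known, runtime detection of the all-zero case is at least possible. A hedged sketch that samples the depth map and treats a frame with no nonzero values as invalid (assumes a 32-bit float depth/disparity map, as produced when following Apple's TrueDepth streaming sample):

import AVFoundation

// Sketch: detect an all-zero depth map so the app can fall back.
func depthMapLooksEmpty(_ depthData: AVDepthData) -> Bool {
    let map = depthData.depthDataMap
    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(map) else { return true }
    let width = CVPixelBufferGetWidth(map)
    let height = CVPixelBufferGetHeight(map)
    let rowBytes = CVPixelBufferGetBytesPerRow(map)

    // Sample a grid of pixels rather than every value.
    for y in stride(from: 0, to: height, by: 8) {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in stride(from: 0, to: width, by: 8) where row[x] != 0 {
            return false   // found real data
        }
    }
    return true
}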
Replies: 0 · Boosts: 0 · Views: 115 · Activity: 6d
Playing Apple FairPlay encrypted content on iOS 26
On iOS 17 we've had no problem playing Apple FairPlay encrypted content, with keys delivered from our key server running on FairPlay Streaming Server SDK 5.1 and subsequently FairPlay Streaming Server SDK 26. The app is built and deployed using Xcode 26.1.1 (17B100) with no changes to the code, and, as expected, the content continued to be successfully decrypted and played (so far so good).

However, as soon as a device was updated to iOS 26, that device would no longer play the encrypted content. Devices remaining on iOS 17 continue to work normally, and the debugging logs are a sanity check that proves that.

Is anyone else experiencing this issue? Here's the code (you should be able to drop it into a fresh iOS Xcode project and provide a server URL, content URL and certificate).
Replies: 0 · Boosts: 0 · Views: 117 · Activity: 6d
HLS CMAF playlist with multiple DRM changes
Hi,

Is it possible to have a playlist that starts with a stream in the clear, then switches to a DRM-encrypted period, and then switches back? Can I just do the following? (I've removed the video segment lines; I'm only interested in the parts where I want to signal the new DRM region.)

#EXT-X-MAP:URI="video_2_10000000_t17586401730000000_init.mp4"
#EXT-X-KEY:METHOD=NONE
...
#EXT-X-MAP:URI="video_2_10000000_t17587374640000000_init.mp4"
#EXT-X-KEY:METHOD=SAMPLE-AES,URI="skd://5df0b36ac4bb4d0ff954a73b502ac332",KEYFORMAT="com.apple.streamingkeydelivery",KEYFORMATVERSIONS="1"
...
#EXT-X-MAP:URI="video_2_10000000_t17587376740000000_init.mp4?"
#EXT-X-KEY:METHOD=NONE

Should I insert discontinuity tags or something else? Right now what I can observe is that I get some audio drops when I try to do this.
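For reference, my reading of RFC 8216 is that a change of the Media Initialization Section (a new EXT-X-MAP) generally calls for an EXT-X-DISCONTINUITY, so a version with the discontinuities inserted would look roughly like this (a sketch of tag placement only, reusing the URIs above):

#EXT-X-MAP:URI="video_2_10000000_t17586401730000000_init.mp4"
#EXT-X-KEY:METHOD=NONE
...
#EXT-X-DISCONTINUITY
#EXT-X-MAP:URI="video_2_10000000_t17587374640000000_init.mp4"
#EXT-X-KEY:METHOD=SAMPLE-AES,URI="skd://5df0b36ac4bb4d0ff954a73b502ac332",KEYFORMAT="com.apple.streamingkeydelivery",KEYFORMATVERSIONS="1"
...
#EXT-X-DISCONTINUITY
#EXT-X-MAP:URI="video_2_10000000_t17587376740000000_init.mp4?"
#EXT-X-KEY:METHOD=NONE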
Replies: 1 · Boosts: 0 · Views: 258 · Activity: 6d
AVPlayer: escaped characters displayed incorrectly in WebVTT subtitles (HLS)
Quotes are displayed incorrectly in subtitles of AVPlayerViewController when streaming VOD content using HLS:
• a single quote ' (escaped as &apos;) is displayed as apos;
• double quotes " (escaped as &quot;) are displayed as quot;

This escaping follows the VTT specification. The same stream works fine in VLC, which shows the quotes correctly in subtitles.

The subtitle .vtt files are served with Content-Type: text/vtt and start with:

WEBVTT
X-TIMESTAMP-MAP=LOCAL:490014:06:04.000,MPEGTS:158764568760056

Example cue:

490014:05:46.000 --> 490014:05:50.440 align:start line:83% position:14%
lære dig endnu bedre at kende.&quot;

And the playlist has:

#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",LANGUAGE="da",NAME="Dansk",AUTOSELECT=YES,CHARACTERISTICS="public.accessibility.transcribes-spoken-dialog,public.accessibility.describes-music-and-sound",URI="subs/dan_5/playlist.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=780000,CODECS="mp4a.40.5,avc1.42c01e",RESOLUTION=256x144,AUDIO="audio-aac",SUBTITLES="subs"

Adding 'wvtt' to the CODECS list in the playlist does not make a difference.

Is this a known bug? Is there a workaround? I guess AVAssetResourceLoaderDelegate could be used to intercept and parse the subtitle files, but that seems like quite a hack and not really intended for this.
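If you do end up intercepting the subtitle files (custom URL scheme plus AVAssetResourceLoaderDelegate, which is indeed a hack), the decoding step itself is small. A sketch covering just the named entities WebVTT cue text is likely to contain (the interception plumbing is omitted):

import Foundation

// Sketch: decode the handful of named character references that show
// up in WebVTT cue text before re-serving the file to AVPlayer.
func decodeVTTEntities(_ text: String) -> String {
    let entities: [(String, String)] = [
        ("&lt;", "<"), ("&gt;", ">"), ("&quot;", "\""),
        ("&apos;", "'"), ("&nbsp;", "\u{00A0}"),
        ("&amp;", "&"),            // decode the ampersand last
    ]
    var result = text
    for (entity, character) in entities {
        result = result.replacingOccurrences(of: entity, with: character)
    }
    return result
}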
Replies: 2 · Boosts: 0 · Views: 90 · Activity: 6d
When does AVPlayer exclude _HLS_part query param in LL-HLS requests?
Hello,

I'm investigating an issue with LL-HLS playback using AVPlayer, specifically during DVR live seeking (seeking to a past time). I noticed that in certain seeking scenarios, AVPlayer sends a Blocking Playlist Reload request that includes the _HLS_msn parameter but is missing the _HLS_part parameter.

While I understand this is compliant with the HLS spec, I would like to know the specific criteria AVPlayer uses to decide when to drop the _HLS_part parameter. Does AVPlayer intentionally omit the part info when it determines that loading a specific partial segment is unnecessary during a seek operation?

Clarification on this behavior would help us greatly in debugging our stream delivery. Thanks in advance.
Replies: 1 · Boosts: 0 · Views: 85 · Activity: 6d
Logged error/warning in FigCaptureSourceRemote when capturing a photo
I'm using this library to capture photos: https://github.com/Yummypets/YPImagePicker. I've modified it slightly, and I'm using an older version. When testing on my iPhone 16e (iOS 26), whenever I take a photo I get the following two error messages:

<<<< FigXPCUtilities >>>> signalled err=-17281 at <>:302
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:569) - (err=-17281)

These error messages appear, but as far as I can tell the photo comes through OK and I can save the data without a problem. I've even removed all my handling code to see if it was something I was doing. I don't really want to ship with these errors showing, but I also have no idea what could be causing them. ChatGPT was not helpful in diagnosing this.

Does anyone know what can cause this error? Is there a way I can see the source code to figure out if there's something I'm doing wrong here? It really seems like an internal Apple error; otherwise I would have expected more details relating the error to the code I've written. Any clues would be appreciated!
Replies: 0 · Boosts: 0 · Views: 337 · Activity: 1w
AVCapturePhotoOutput crashes at delegate callback on macOS 13.7.5
A functioning Multiplatform app, which includes use of Continuity Camera, works correctly capturing photos with AVCapturePhoto on an M1 Mac mini running Sequoia 15.5. However, that app (and a test app just for Continuity Camera) crashes at the delegate callback when run on a 2017 MacBook Pro under macOS 13.7.5. The app was created with Xcode 16 (various releases) using Swift 6 (but I also tried 5). Compiling and running the test app with Xcode 15.2 on the 13.7.5 machine also crashes at the delegate callback.

The iPhone 15 Continuity Camera gets detected and set up correctly, and preview video works correctly. It's when the capture-photo code runs that the crash occurs. The relevant capture code is:

func capturePhoto() {
    let captureSettings = AVCapturePhotoSettings()
    captureSettings.flashMode = .auto
    photoOutput.maxPhotoQualityPrioritization = .quality
    photoOutput.capturePhoto(with: captureSettings, delegate: PhotoDelegate.shared)
    print("**** CameraManager: capturePhoto")
}

and the delegate callbacks are:

class PhotoDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    nonisolated(unsafe) static let shared = PhotoDelegate()

    // MARK: - Delegate callbacks
    func photoOutput(
        _ output: AVCapturePhotoOutput,
        didFinishProcessingPhoto photo: AVCapturePhoto,
        error: (any Error)?
    ) {
        print("**** CameraManager: didFinishProcessingPhoto")
        guard let pData = photo.fileDataRepresentation() else {
            print("**** photoOutput is empty")
            return
        }
        print("**** photoOutput data is \(pData.count) bytes")
    }

    func photoOutput(
        _ output: AVCapturePhotoOutput,
        willBeginCaptureFor resolvedSettings: AVCaptureResolvedPhotoSettings
    ) {
        print("**** CameraManager: willBeginCaptureFor")
    }

    func photoOutput(
        _ output: AVCapturePhotoOutput,
        willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings
    ) {
        print("**** CameraManager: willCaptureCapturePhotoFor")
    }
}

The significant parts of the crash report are:

Crashed Thread: 3 Dispatch queue: com.apple.cmio.CMIOExtensionProviderHostContext
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000000
Exception Codes: 0x0000000000000001, 0x0000000000000000
Termination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11
Terminating Process: exc handler [30850]

VM Region Info: 0 is not in any region. Bytes before following region: 4296495104
      REGION TYPE   START - END          [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
      UNUSED SPACE AT START
--->  __TEXT        100175000-10017f000  [  40K]  r-x/r-x SM=COW ...tinuityCamera

Thread 0:: Dispatch queue: com.apple.main-thread
0   libsystem_kernel.dylib    0x7ff803aed552 mach_msg2_trap + 10
1   libsystem_kernel.dylib    0x7ff803afb6cd mach_msg2_internal + 78
2   libsystem_kernel.dylib    0x7ff803af4584 mach_msg_overwrite + 692
3   libsystem_kernel.dylib    0x7ff803aed83a mach_msg + 19
4   CoreFoundation            0x7ff803c07f8f __CFRunLoopServiceMachPort + 145
5   CoreFoundation            0x7ff803c06a10 __CFRunLoopRun + 1365
6   CoreFoundation            0x7ff803c05e51 CFRunLoopRunSpecific + 560
7   HIToolbox                 0x7ff80d694f3d RunCurrentEventLoopInMode + 292
8   HIToolbox                 0x7ff80d694d4e ReceiveNextEventCommon + 657
9   HIToolbox                 0x7ff80d694aa8 _BlockUntilNextEventMatchingListInModeWithFilter + 64
10  AppKit                    0x7ff806ca59d8 _DPSNextEvent + 858
11  AppKit                    0x7ff806ca4882 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1214
12  AppKit                    0x7ff806c96ef7 -[NSApplication run] + 586
13  AppKit                    0x7ff806c6b111 NSApplicationMain + 817
14  SwiftUI                   0x7ff90e03a9fb 0x7ff90dfb4000 + 551419
15  SwiftUI                   0x7ff90f0778b4 0x7ff90dfb4000 + 17578164
16  SwiftUI                   0x7ff90e9906cf 0x7ff90dfb4000 + 10340047
17  ContinuityCamera          0x10017b49e 0x100175000 + 25758
18  dyld                      0x7ff8037d1418 start + 1896

Thread 1:
0   libsystem_pthread.dylib   0x7ff803b27bb0 start_wqthread + 0

Thread 2:
0   libsystem_pthread.dylib   0x7ff803b27bb0 start_wqthread + 0

Thread 3 Crashed:: Dispatch queue: com.apple.cmio.CMIOExtensionProviderHostContext
0   ???                       0x0 ???
1   AVFCapture                0x7ff82045996c StreamAsyncStillCaptureCallback + 61
2   CoreMediaIO               0x7ff813a4358f __94-[CMIOExtensionProviderHostContext captureAsyncStillImageWithStreamID:uniqueID:options:reply:]_block_invoke + 498
3   libxpc.dylib              0x7ff803875b33 _xpc_connection_reply_callout + 36
4   libxpc.dylib              0x7ff803875ab2 _xpc_connection_call_reply_async + 69
5   libdispatch.dylib         0x7ff80398b099 _dispatch_client_callout3 + 8
6   libdispatch.dylib         0x7ff8039a6795 _dispatch_mach_msg_async_reply_invoke + 387
7   libdispatch.dylib         0x7ff803991088 _dispatch_lane_serial_drain + 393
8   libdispatch.dylib         0x7ff803991d6c _dispatch_lane_invoke + 417
9   libdispatch.dylib         0x7ff80399c3fc _dispatch_workloop_worker_thread + 765
10  libsystem_pthread.dylib   0x7ff803b28c55 _pthread_wqthread + 327
11  libsystem_pthread.dylib   0x7ff803b27bbf start_wqthread + 15

Of course, the MacBook Pro is an old device, but Continuity Camera works with the installed Photo Booth app, so it should be possible. Any thoughts on solving this situation would be appreciated.

Regards, Michaela
Replies: 1 · Boosts: 0 · Views: 418 · Activity: 1w