Hello,
I'm facing a really perplexing issue. The primary problem is that sometimes I get depth and video data as expected, but at other times I don't. And sometimes I'll get both data outputs for 4-5 frames and then it'll just stop. The source code I implemented is a modified version of the sample code provided by Apple, and interestingly enough I can't re-create this issue with the Apple sample app. So I'm wondering what I could be doing wrong.
Here's the code for setting up the capture input. preferredDepthResolution is 1280 in my case. I'm running this on an iPad Pro (6th gen), iOS 17.0.3 (21A360). I encounter this issue on an iPhone 13 Pro as well, on iOS 17.0 (21A329).
private func setupLiDARCaptureInput() throws {
    // Look up the LiDAR camera.
    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) else {
        throw ConfigurationError.lidarDeviceUnavailable
    }
    guard let format = (device.formats.last { format in
        format.formatDescription.dimensions.width == preferredWidthResolution &&
        format.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange &&
        format.videoSupportedFrameRateRanges.first(where: { $0.maxFrameRate >= 60 }) != nil &&
        !format.isVideoBinned &&
        !format.supportedDepthDataFormats.isEmpty
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }
    guard let depthFormat = (format.supportedDepthDataFormats.last { depthFormat in
        depthFormat.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_DepthFloat16
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }
    // Begin the device configuration.
    try device.lockForConfiguration()
    // Configure the device and depth formats.
    device.activeFormat = format
    device.activeDepthDataFormat = depthFormat
    let desc = format.formatDescription
    dimensions = CMVideoFormatDescriptionGetDimensions(desc)
    let duration = CMTime(value: 1, timescale: CMTimeScale(60))
    device.activeVideoMinFrameDuration = duration
    device.activeVideoMaxFrameDuration = duration
    // Finish the device configuration.
    device.unlockForConfiguration()
    self.device = device
    print("Selected video format: \(device.activeFormat)")
    print("Selected depth format: \(String(describing: device.activeDepthDataFormat))")
    // Add a device input to the capture session.
    let deviceInput = try AVCaptureDeviceInput(device: device)
    captureSession.addInput(deviceInput)
    guard let audioDevice = AVCaptureDevice.default(for: .audio) else {
        return
    }
    // Configure audio input - always configure audio even if isAudioEnabled is false.
    audioDeviceInput = try! AVCaptureDeviceInput(device: audioDevice)
    captureSession.addInput(audioDeviceInput)
    deviceSystemPressureStateObservation = device.observe(
        \.systemPressureState,
        options: .new
    ) { _, change in
        guard let systemPressureState = change.newValue else { return }
        print("system pressure \(systemPressureState.levelAsString()) due to \(systemPressureState.factors)")
    }
}
Here's how I'm setting up the output:
private func setupLiDARCaptureOutputs() {
    // Create an object to output video sample buffers.
    videoDataOutput = AVCaptureVideoDataOutput()
    captureSession.addOutput(videoDataOutput)
    // Create an object to output depth data.
    depthDataOutput = AVCaptureDepthDataOutput()
    depthDataOutput.isFilteringEnabled = false
    captureSession.addOutput(depthDataOutput)
    audioDeviceOutput = AVCaptureAudioDataOutput()
    audioDeviceOutput.setSampleBufferDelegate(self, queue: videoQueue)
    captureSession.addOutput(audioDeviceOutput)
    // Create an object to synchronize the delivery of depth and video data.
    outputVideoSync = AVCaptureDataOutputSynchronizer(dataOutputs: [depthDataOutput, videoDataOutput])
    outputVideoSync.setDelegate(self, queue: videoQueue)
    // Enable camera intrinsics matrix delivery.
    guard let outputConnection = videoDataOutput.connection(with: .video) else { return }
    if outputConnection.isCameraIntrinsicMatrixDeliverySupported {
        outputConnection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}
The top part of my delegate implementation is as follows:
func dataOutputSynchronizer(
    _ synchronizer: AVCaptureDataOutputSynchronizer,
    didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection
) {
    // Retrieve the synchronized depth and sample buffer container objects.
    guard let syncedDepthData = synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
          let syncedVideoData = synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
        if synchronizedDataCollection.synchronizedData(for: depthDataOutput) == nil {
            print("no depth data at time \(mach_absolute_time())")
        }
        if synchronizedDataCollection.synchronizedData(for: videoDataOutput) == nil {
            print("no video data at time \(mach_absolute_time())")
        }
        return
    }
    print("received depth data \(mach_absolute_time())")
}
As you can see, I'm console logging whenever depth data is not received. Note that because I'm driving the video frames at 60 fps, it's expected that I'll only receive depth data for every alternate video frame.
Console output is posted as a follow-up comment (because of the character limit). I edited some lines out for brevity. You'll see it started streaming correctly, but after a while it stopped receiving both video and depth outputs (in some runs it works perfectly, and in other runs I receive no depth data whatsoever). One thing to note: I sometimes use QuickTime screen mirroring to see what the app is displaying, so I'm not sure if that's causing any interference; that said, I don't see any system pressure changes either.
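One more diagnostic I'm planning to add (a quick sketch; it assumes the droppedReason properties on the synchronized data behave as documented) is distinguishing "data absent" from "data delivered but dropped" inside the synchronizer callback:

// Sketch: inside dataOutputSynchronizer(_:didOutput:), check whether depth/video
// arrived but was marked dropped, and log the reason (.lateData, .outOfBuffers, .discontinuity).
if let depth = synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
   depth.depthDataWasDropped {
    print("depth dropped, reason: \(depth.droppedReason.rawValue)")
}
if let video = synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData,
   video.sampleBufferWasDropped {
    print("video dropped, reason: \(video.droppedReason.rawValue)")
}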
Any help is most appreciated! Thanks.
I generated the code for a CMIO CameraExtension from the Xcode target template, and it runs with FaceTime. I guess this kind of extension has a lot of security limitations.
I'd like to run a command like "netstat" in the extension. Is it possible to call Process.run()? I keep getting an error like "The file zsh doesn't exist". The same Process.run() code worked in a macOS app.
I'd also like to use DistributedNotificationCenter to send text from the app to the CameraExtension. Is that possible? I do not receive any messages in the CameraExtension.
If there is any other IPC method between a macOS app and a CameraExtension, please let me know.
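For reference, this is roughly the DistributedNotificationCenter usage I tried (a minimal sketch; the notification name is a placeholder, and sandboxing may restrict what a CameraExtension can post or receive):

import Foundation

let messageName = Notification.Name("com.example.cameraextension.message") // placeholder name

// In the macOS app: post a distributed notification.
// Note: sandboxed processes may not be allowed to attach a userInfo payload.
DistributedNotificationCenter.default().postNotificationName(
    messageName,
    object: nil,
    userInfo: ["text": "hello extension"],
    deliverImmediately: true
)

// In the CameraExtension: observe the same name.
DistributedNotificationCenter.default().addObserver(
    forName: messageName,
    object: nil,
    queue: .main
) { note in
    print("received: \(String(describing: note.userInfo?["text"]))")
}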
I tried the same code on iOS 17 and iOS 16 with Address Sanitizer enabled; on iOS 17 it crashes. Why?
Can anyone help me?
AudioComponent comp = NULL;
AudioComponentDescription compDesc = {0};
compDesc.componentType = kAudioUnitType_Output;
compDesc.componentSubType = kAudioUnitSubType_RemoteIO;
compDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
compDesc.componentFlags = 0;
compDesc.componentFlagsMask = 0;
comp = AudioComponentFindNext(NULL, &compDesc);
if (comp == NULL)
{
    assert(false);
}

AudioUnit tempAudioUnit;
OSStatus osResult = AudioComponentInstanceNew(comp, &tempAudioUnit);
if (osResult != noErr)
{
    assert(false);
}
Hello,
I am deaf-blind and I program with a braille display.
Currently, I am experiencing a difficulty with one of my apps. Basically, I'm converting AVAudioPCMBuffer to CMSampleBuffer, and so far so good. I want to append several CMSampleBuffers to a video written with AVAssetWriter. The problem is that I can only add roughly 2,000 CMSampleBuffers.
I'm trying to create a video: I add photos that are in an array, and then I add audio from CMSampleBuffers. But I can't append many CMSampleBuffers; it only goes up to around 2,000. I do not know what else to do. Help me.
Below is a small excerpt of the code:
let queue = DispatchQueue(label: "AssetWriterQueue")
let audioProvider = SampleProvider(buffers: audioBuffers)
let videoProvider = SampleProvider(buffers: videoBuffers)
let audioInput = createAudioInput(audioBuffers: audioBuffers)
let videoInput = createVideoInput(videoBuffers: videoBuffers)
let adaptor = createPixelBufferAdaptor(videoInput: videoInput)
let assetWriter = try AVAssetWriter(outputURL: url, fileType: .mp4)
assetWriter.add(videoInput)
assetWriter.add(audioInput)
assetWriter.startWriting()
assetWriter.startSession(atSourceTime: .zero)
await withCheckedContinuation { continuation in
    videoInput.requestMediaDataWhenReady(on: queue) {
        let time = videoProvider.getPresentationTime()
        if let buffer = videoProvider.getNextBuffer() {
            adaptor.append(buffer, withPresentationTime: time)
        } else {
            videoInput.markAsFinished()
            continuation.resume()
        }
    }
}
await withCheckedContinuation { continuation in
    audioInput.requestMediaDataWhenReady(on: queue) {
        if let buffer = audioProvider.getNextBuffer() {
            audioInput.append(buffer)
        } else {
            audioInput.markAsFinished()
            continuation.resume()
        }
    }
}
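For reference, the pattern in the requestMediaDataWhenReady(on:) documentation appends buffers in a loop while isReadyForMoreMediaData is true; here is a minimal sketch of the video side in that shape, using the same names as my code above (I have not confirmed whether this relates to the ~2,000 buffer limit):

await withCheckedContinuation { continuation in
    videoInput.requestMediaDataWhenReady(on: queue) {
        // Keep appending while the input can accept more data,
        // rather than appending a single buffer per callback.
        while videoInput.isReadyForMoreMediaData {
            let time = videoProvider.getPresentationTime()
            if let buffer = videoProvider.getNextBuffer() {
                adaptor.append(buffer, withPresentationTime: time)
            } else {
                videoInput.markAsFinished()
                continuation.resume()
                return
            }
        }
    }
}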
I'm trying to use the Apple Music API to mirror playlists from Spotify. I'm wondering if updating/reordering playlists, or deleting playlists/removing items, will ever be on the roadmap for the API.
Is this something that can't be handled, or is it a business decision? I see that there is more flexibility with MusicKit. Can we expect something like this in the future for the API?
Thanks,
Since upgrading to iOS 17, WebRTC playback has problems going fullscreen: the video element rapidly changes its dimensions while taking the full screen size, and the animation looks very glitchy.
I'm observing this issue with every WebRTC player available, so I think the problem is in mobile Safari.
Is there any way to prevent resizing of video on fullscreen?
We're experimenting with a stream that has a large (10 minute) clear portion in front of the FairPlay-protected section.
We're noticing that AVPlayer/Safari trigger calls to fetch the license key even while it's playing the clear part, and once we provide the key, playback fails with:
name = AVPlayerItemFailedToPlayToEndTimeNotification, object = Optional(<AVPlayerItem: 0x281ff2800> I/NMU [No ID]), userInfo = Optional([AnyHashable("AVPlayerItemFailedToPlayToEndTimeErrorKey"): Error Domain=CoreMediaErrorDomain Code=-12894 "(null)"])
- name : "AVPlayerItemFailedToPlayToEndTimeNotification"
- object : <AVPlayerItem: 0x281ff2800> I/NMU [No ID]
▿ userInfo : 1 element
▿ 0 : 2 elements
▿ key : AnyHashable("AVPlayerItemFailedToPlayToEndTimeErrorKey")
- value : "AVPlayerItemFailedToPlayToEndTimeErrorKey"
- value : Error Domain=CoreMediaErrorDomain Code=-12894 "(null)"
It seems like AVPlayer is trying to decrypt the clear portion of the stream...and I'm wondering if it's because we've set up our manifest incorrectly.
Here it is:
#EXTM3U
#EXT-X-VERSION:8
#EXT-X-TARGETDURATION:20
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MAP:URI="clear-asset.mp4",BYTERANGE="885@0"
#EXT-X-DEFINE:NAME="path0",VALUE="clear-asset.mp4"
#EXTINF:9.98458,
#EXT-X-BYTERANGE:81088@885
{$path0}
#EXTINF:19.96916,
#EXT-X-BYTERANGE:159892@81973
{$path0}
#EXTINF:19.96916,
#EXT-X-BYTERANGE:160245@241865
{$path0}
#EXT-X-DISCONTINUITY
#EXT-X-MAP:URI="secure-asset.mp4",BYTERANGE="788@0"
#EXT-X-DEFINE:NAME="path1",VALUE="secure-asset.mp4"
#EXT-X-KEY:METHOD=SAMPLE-AES,URI="skd://guid",KEYFORMAT="com.apple.streamingkeydelivery",KEYFORMATVERSIONS="1"
#EXTINF:19.96916,
#EXT-X-BYTERANGE:159928@5196150
{$path1}
#EXT-X-ENDLIST
Our DJ application Mixxx renders scrolling waveforms at 60 Hz. This looks perfectly smooth on an older 2015 MacBook Pro. However, it looks jittery on a new M1 device with "ProMotion" enabled. Selecting 60 Hz in the display settings fixes the issue.
We are looking for a way to tell macOS that it should expect 60 Hz renderings from Mixxx and must not display them early (at 120 Hz) even if the pictures are ready.
The alternative would be to read out the display settings and ask the user to select 60 Hz.
Is there an API to:
hint the display driver that we render at 60 Hz
read out the refresh rate settings?
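Two APIs that may be relevant, depending on how the drawing is driven (a minimal sketch with a hypothetical WaveformView; it assumes macOS 12+ for NSScreen.maximumFramesPerSecond and macOS 14+ for NSView's CADisplayLink, and it is not a drop-in for Mixxx's actual rendering path):

import AppKit
import QuartzCore

final class WaveformView: NSView {
    private var waveformDisplayLink: CADisplayLink?

    // Read the refresh-rate ceiling of the screen this window is on (macOS 12+).
    var screenMaxFPS: Int {
        window?.screen?.maximumFramesPerSecond ?? 60
    }

    // Ask the system to pace callbacks at 60 Hz, even on a 120 Hz ProMotion panel (macOS 14+).
    func startRendering() {
        let link = displayLink(target: self, selector: #selector(renderFrame(_:)))
        link.preferredFrameRateRange = CAFrameRateRange(minimum: 60, maximum: 60, preferred: 60)
        link.add(to: .main, forMode: .common)
        waveformDisplayLink = link
    }

    @objc private func renderFrame(_ link: CADisplayLink) {
        needsDisplay = true // draw the next waveform frame
    }
}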
Hello everyone,
I'm currently facing a challenging issue with my macOS application that involves HEIF image processing. The application uses an OperationQueue to handle HEIF compression tasks. However, I've observed a significant delay in processing when a screen recording is active. This delay doesn't occur under normal circumstances.
Here's a brief overview of the implementation:
The HEIF processing task is encapsulated within an Operation added to an OperationQueue.
The task involves using CIContext for image processing.
When screen recording is initiated, the operation's execution becomes unusually slow or gets delayed extensively.
After some research and community feedback, I learned that screen recording might be affecting the system's resource allocation, particularly impacting tasks that utilize GPU resources, like CIContext operations in my case.
To address this, I tried the following:
Switching to a custom dispatch queue with a .userInitiated QoS.
Using GCD instead of OperationQueue.
Despite these attempts, the issue persists during screen recording. It seems like the screen recording process is given higher priority by macOS, leading to resource reallocation and thus affecting my application's performance.
I'm looking for insights or suggestions on how to handle this scenario more effectively. Specifically, I am interested in:
Understanding how screen recording impacts resource allocation in macOS.
Exploring ways to ensure that my HEIF processing task is not severely impacted by other system processes like screen recording.
Any best practices or alternative approaches for handling image processing tasks that are sensitive to system resource availability.
Here's a snippet of the HEIF processing function for reference:
import CoreImage

struct CommandResult: CustomStringConvertible {
    let output: String
    let error: Process.TerminationReason
    let status: Int32

    var description: String {
        return "error:\(error.rawValue), output:\(output), status:\(status)"
    }
}

func heif(at sourceURL: URL, to destinationURL: URL, as quality: Int = 75) -> CommandResult {
    let compressionQuality = CGFloat(quality) / 100.0
    guard let ciImage = CIImage(contentsOf: sourceURL) else {
        return CommandResult(output: "Load heic image failed \(sourceURL)", error: .exit, status: -1)
    }
    let context = CIContext(options: nil)
    let heifOptions = [kCGImageDestinationLossyCompressionQuality: compressionQuality] as! [CIImageRepresentationOption: Any]
    do {
        try context.writeHEIFRepresentation(of: ciImage,
                                            to: destinationURL,
                                            format: .RGBA8,
                                            colorSpace: ciImage.colorSpace!,
                                            options: heifOptions)
    } catch {
        return CommandResult(output: "Compress and write heic image failed \(sourceURL)", error: .exit, status: -1)
    }
    return CommandResult(output: "Compress and write heic image successfully \(sourceURL)", error: .exit, status: 0)
}
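One variation I'm considering (a sketch; I haven't verified that it sidesteps the contention) is forcing this path onto a CPU-only Core Image context so it doesn't compete with the screen-recording process for the GPU:

// Hypothetical variation: render with a software (CPU) Core Image context.
// Generally slower, but it may be less affected by GPU contention.
let cpuContext = CIContext(options: [.useSoftwareRenderer: true])

Even if the software path turns out to be too slow to ship, comparing it against the GPU path would at least help confirm whether GPU contention is the bottleneck.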
Thank you for your time and any assistance you can provide!
var config = PHPickerConfiguration()
config.filter = PHPickerFilter.images
I want only 'png' files to be displayed when the PHPickerViewController photo list is opened.
I've read this post : https://developer.apple.com/forums/thread/687415
In that post (from two years ago), it is mentioned that filtering by image format with PHPickerConfiguration is not possible.
Is it still not possible? Has issue 71832162 not been resolved?
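As a partial workaround (a sketch that filters after selection rather than filtering the picker list itself), the delegate can check each result's type:

import PhotosUI
import UniformTypeIdentifiers

func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    picker.dismiss(animated: true)
    // Keep only results whose item provider offers PNG data.
    let pngResults = results.filter {
        $0.itemProvider.hasItemConformingToTypeIdentifier(UTType.png.identifier)
    }
    print("selected \(pngResults.count) PNG items out of \(results.count)")
}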
When developing a custom camera for iOS, if the AVCaptureSession sessionPreset is set to AVCaptureSessionPresetPhoto, photos cannot be taken on the iPhone 15 Pro Max, while other devices work normally. With other sessionPreset values, capture works fine. Please help us determine the cause.
In addition, I initially thought there was a problem with our code, but when I looked at demos written by others, the same problem occurred when using the AVCaptureSessionPresetPhoto preset on an iPhone 15 Pro Max.
I've started seeing several users getting an app crash that I've been unable to find the root cause for so far. I've tried running the app in a release build with Address Sanitizer and zombie objects checks enabled, but I have been unable to reproduce it. It only occurs for iOS 17 users. Any ideas on how I can troubleshoot this?
Crashed: com.apple.main-thread
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000000000000000
Crashed: com.apple.main-thread
0 libsystem_platform.dylib 0xed4 _platform_memmove + 52
1 QuartzCore 0x137120 CA::Render::InterpolatedFunction::encode(CA::Render::Encoder*) const + 248
2 QuartzCore 0x136f40 CA::Render::GradientLayer::encode(CA::Render::Encoder*) const + 44
3 QuartzCore 0x2e384 CA::Render::Layer::encode(CA::Render::Encoder*) const + 284
4 QuartzCore 0x2e224 CA::Render::encode_set_object(CA::Render::Encoder*, unsigned long, unsigned int, CA::Render::Object*, unsigned int) + 196
5 QuartzCore 0x2b654 invocation function for block in CA::Context::commit_transaction(CA::Transaction*, double, double*) + 244
6 QuartzCore 0x2b4fc CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 368
7 QuartzCore 0x2b488 CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 252
8 QuartzCore 0x2b4bc CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 304
9 QuartzCore 0x2b488 CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 252
10 QuartzCore 0x2b488 CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 252
11 QuartzCore 0x2b488 CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 252
12 QuartzCore 0x2b488 CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 252
13 QuartzCore 0x2b488 CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 252
14 QuartzCore 0x2b488 CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 252
15 QuartzCore 0x2b488 CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 252
16 QuartzCore 0x2b488 CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 252
17 QuartzCore 0x2b488 CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 252
18 QuartzCore 0x2b488 CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 252
19 QuartzCore 0x6fc60 CA::Context::commit_transaction(CA::Transaction*, double, double*) + 11192
20 QuartzCore 0x66574 CA::Transaction::commit() + 648
21 UIKitCore 0x31b5ec __34-[UIApplication _firstCommitBlock]_block_invoke_2 + 36
22 CoreFoundation 0x373a8 __CFRUNLOOP_IS_CALLING_OUT_TO_A_BLOCK__ + 28
23 CoreFoundation 0x35b9c __CFRunLoopDoBlocks + 356
24 CoreFoundation 0x33a9c __CFRunLoopRun + 848
25 CoreFoundation 0x33668 CFRunLoopRunSpecific + 608
26 GraphicsServices 0x35ec GSEventRunModal + 164
27 UIKitCore 0x22c2b4 -[UIApplication _run] + 888
28 UIKitCore 0x22b8f0 UIApplicationMain + 340
29 Coach 0x799d8 main + 14 (main.m:14)
30 ??? 0x1abefadcc (Missing)
I’m working with the Spatial Video related APIs in AVFoundation, and while I can create an AVAssetReader that reads an AVAssetTrack that reports a .containsStereoMultiviewVideo media characteristic (on a spatial video recorded by an iPhone 15 Pro), the documentation doesn’t make it clear how I can obtain the secondary video frame from that track.
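One possibly relevant hook is the VideoToolbox MV-HEVC decompression property (a sketch only; the layer IDs 0 and 1 are an assumption, the helper name is hypothetical, and I haven't confirmed that AVAssetReaderTrackOutput accepts these settings for this purpose):

import AVFoundation
import VideoToolbox

// Sketch: ask the decoder to emit specific MV-HEVC layers when reading the track.
// The real layer IDs would come from the asset's format description, not hard-coded values.
func makeStereoTrackOutput(for track: AVAssetTrack) -> AVAssetReaderTrackOutput {
    let outputSettings: [String: Any] = [
        AVVideoDecompressionPropertiesKey: [
            kVTDecompressionPropertyKey_RequestedMVHEVCVideoLayerIDs as String: [0, 1]
        ]
    ]
    return AVAssetReaderTrackOutput(track: track, outputSettings: outputSettings)
}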
Does anyone know where to look? I've scoured the forums, documentation, and other resources, and I've had no luck.
Thanks!
Is there a way to play a specific rectangular region of interest of a video in an arbitrarily-sized view?
Let's say I have a 1080p video but I'm only interested in a sub-region of the full frame. Is there a way to specify a source rect to be displayed in an arbitrary view (SwiftUI view, ideally), and have it play that in real time, without having to pre-render the cropped region?
Update: I may have found a solution here: img DOT ly/blog/trim-and-crop-video-in-swift/ (Apple won't allow that URL for some dumb reason)
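For anyone with the same question, one approach that may avoid pre-rendering (a sketch with UIKit and a hypothetical CroppedPlayerView; untested, and it could be wrapped for SwiftUI with UIViewRepresentable) is to offset an AVPlayerLayer inside a clipping view so only the region of interest is visible:

import UIKit
import AVFoundation

/// Hypothetical view that shows only `cropRect` (in video-pixel coordinates)
/// of the player's video, scaled to fill its own bounds, with no re-rendering.
final class CroppedPlayerView: UIView {
    private let playerLayer = AVPlayerLayer()
    private let videoSize: CGSize
    private let cropRect: CGRect

    init(player: AVPlayer, videoSize: CGSize, cropRect: CGRect) {
        self.videoSize = videoSize
        self.cropRect = cropRect
        super.init(frame: .zero)
        clipsToBounds = true
        playerLayer.player = player
        playerLayer.videoGravity = .resize
        layer.addSublayer(playerLayer)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func layoutSubviews() {
        super.layoutSubviews()
        guard cropRect.width > 0, cropRect.height > 0 else { return }
        // Scale so the crop rect fills this view, then offset the player layer
        // so the crop rect's origin lands at this view's origin.
        let scaleX = bounds.width / cropRect.width
        let scaleY = bounds.height / cropRect.height
        playerLayer.frame = CGRect(
            x: -cropRect.minX * scaleX,
            y: -cropRect.minY * scaleY,
            width: videoSize.width * scaleX,
            height: videoSize.height * scaleY
        )
    }
}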
I tried running this demo app in "Designed for iPad" mode on my M3 MacBook Pro, and it crashes with the following errors:
LSPrefs: could not find untranslocated node for <FSNode 0x6000022578c0> { isDir = ?, path = '/private/var/folders/yk/2vw8ntf53r79cyldlxx4t4t80000gn/X/6527F067-B4CF-5E9F-8412-6ADCB21853EE/d/Wrapper/Capturing Photos.app' }, proceeding on the assumption it is not translocated: Error Domain=NSPOSIXErrorDomain Code=1 "Operation not permitted"
LSPrefs: could not find untranslocated node for <FSNode 0x6000022578c0> { isDir = ?, path = '/private/var/folders/yk/2vw8ntf53r79cyldlxx4t4t80000gn/X/6527F067-B4CF-5E9F-8412-6ADCB21853EE/d/Wrapper/Capturing Photos.app' }, proceeding on the assumption it is not translocated: Error Domain=NSPOSIXErrorDomain Code=1 "Operation not permitted"
LSPrefs: could not find untranslocated node for <FSNode 0x6000022578c0> { isDir = ?, path = '/private/var/folders/yk/2vw8ntf53r79cyldlxx4t4t80000gn/X/6527F067-B4CF-5E9F-8412-6ADCB21853EE/d/Wrapper/Capturing Photos.app' }, proceeding on the assumption it is not translocated: Error Domain=NSPOSIXErrorDomain Code=1 "Operation not permitted"
CMIO_DAL_PlugInManagement.cpp:917:CreatePlugIn Could not find plugin with kCMIOHardwarePlugInTypeID
CMIO_DAL_CMIOExtension_Device.mm:355:Device legacy uuid isn't present, using new style uuid instead
CMIO_DAL_CMIOExtension_Device.mm:355:Device legacy uuid isn't present, using new style uuid instead
[C:1-3] Error received: Invalidated by remote connection.
CMIO_DAL_CMIOExtension_Stream.mm:1429:GetPropertyData wrong data size for kCMIOStreamPropertyCenterStageFramingMode
CMIOHardware.cpp:331:CMIOObjectGetPropertyData Error: 561211770, failed
Fig assert: "err == 0 " at bail (CMIOUtilities.h:133) - (err=561211770)
CMIO_DAL_CMIOExtension_Stream.mm:1429:GetPropertyData wrong data size for kCMIOStreamPropertyCenterStageFramingMode
CMIOHardware.cpp:331:CMIOObjectGetPropertyData Error: 561211770, failed
Fig assert: "err == 0 " at bail (CMIOUtilities.h:133) - (err=561211770)
Using capture device: FaceTime HD Camera
Camera access not determined.
Unknown client: Capturing Photos
LSPrefs: could not find untranslocated node for <FSNode 0x6000022578c0> { isDir = ?, path = '/private/var/folders/yk/2vw8ntf53r79cyldlxx4t4t80000gn/X/6527F067-B4CF-5E9F-8412-6ADCB21853EE/d/Wrapper/Capturing Photos.app' }, proceeding on the assumption it is not translocated: Error Domain=NSPOSIXErrorDomain Code=1 "Operation not permitted"
Error loading /System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/com.apple.Photos (84): dlopen(/System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/com.apple.Photos, 0x0109): Symbol not found: _OBJC_CLASS_$_PXSearchResultsViewModel
Referenced from: <128FED4B-1EFC-38CC-BFB9-F6980FB96165> /System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/Versions/A/com.apple.Photos
Expected in: <0E82B4EE-CAFC-36CC-8E72-5DF1BAD3BBD2> /System/iOSSupport/System/Library/PrivateFrameworks/PhotosUICore.framework/Versions/A/PhotosUICore
Error loading /System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/com.apple.Photos (84): dlopen(/System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/com.apple.Photos, 0x0109): Symbol not found: _OBJC_CLASS_$_PXSearchResultsViewModel
Referenced from: <128FED4B-1EFC-38CC-BFB9-F6980FB96165> /System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/Versions/A/com.apple.Photos
Expected in: <0E82B4EE-CAFC-36CC-8E72-5DF1BAD3BBD2> /System/iOSSupport/System/Library/PrivateFrameworks/PhotosUICore.framework/Versions/A/PhotosUICore
Error loading /System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/com.apple.Photos (84): dlopen(/System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/com.apple.Photos, 0x0109): Symbol not found: _OBJC_CLASS_$_PXSearchResultsViewModel
Referenced from: <128FED4B-1EFC-38CC-BFB9-F6980FB96165> /System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/Versions/A/com.apple.Photos
Expected in: <0E82B4EE-CAFC-36CC-8E72-5DF1BAD3BBD2> /System/iOSSupport/System/Library/PrivateFrameworks/PhotosUICore.framework/Versions/A/PhotosUICore
AX Safe category class 'SFUnifiedBarRegistrationAccessibility' was not found!
Photo library access not determined.
<<<< FigCaptureCameraParameters >>>> Fig assert: "success" at bail (FigCaptureCameraParameters.m:252) - (err=0)
<<<< FigCaptureCameraParameters >>>> Fig assert: "success" at bail (FigCaptureCameraParameters.m:252) - (err=0)
<<<< FigCaptureCameraParameters >>>> Fig assert: "success" at bail (FigCaptureCameraParameters.m:252) - (err=0)
fopen failed for data file: errno = 2 (No such file or directory)
Errors found! Invalidating cache...
fopen failed for data file: errno = 2 (No such file or directory)
Errors found! Invalidating cache...
CMIOHardware.cpp:1388:CMIOStreamRegisterAsyncStillCaptureCallback stream doesn't support async still capture
CMIOHardware.cpp:1412:CMIOStreamRegisterAsyncStillCaptureCallback Error: 1970171760, failed
<<<< CMIOFigCaptureStream >>>> Fig assert: "! stream->streaming" at bail (CMIOFigCaptureStream.m:1173) - (err=0)
-[MTLDebugDevice newTextureWithDescriptor:iosurface:plane:]:2641: failed assertion `Texture Descriptor Validation
IOSurface textures must use MTLStorageModeShared
libsystem_kernel.dylib`:
0x188f2e0d4 <+0>: mov x16, #0x148
0x188f2e0d8 <+4>: svc #0x80
-> 0x188f2e0dc <+8>: b.lo 0x188f2e0fc ; <+40> Thread 19: signal SIGABRT
0x188f2e0e0 <+12>: pacibsp
0x188f2e0e4 <+16>: stp x29, x30, [sp, #-0x10]!
0x188f2e0e8 <+20>: mov x29, sp
0x188f2e0ec <+24>: bl 0x188f26230 ; cerror_nocancel
0x188f2e0f0 <+28>: mov sp, x29
0x188f2e0f4 <+32>: ldp x29, x30, [sp], #0x10
0x188f2e0f8 <+36>: retab
0x188f2e0fc <+40>: ret
Greetings Fellow Humans,
My player uses the v3 musickit-js library. I am trying to handle situations where a user tries to play explicit content in my player with an account that has content restrictions enabled. I don't see a mechanism to know if the toggle is set in the account. The only mechanism I see is to respond to a CONTENT_RESTRICTED error as handled by the callback to the function I provide as a callback to the mediaPlaybackError event.
I have attached many callbacks (like bufferedProgressDidChange) and those all work, but this one never fires.
music.addEventListener("mediaPlaybackError", onPlaybackError);
Or
music.addEventListener(MusicKit.Events.mediaPlaybackError, onPlaybackError);
My onPlaybackError function, at least for debugging purposes, is:
function onPlaybackError(e) {
  console.log("onPlaybackError");
  console.log(e);
}
There are so many error conditions that are meant to be handled in this way but the callback never happens. Am I missing something? Why doesn't this callback fire?
Thanks!
I am working on an app that uses Core Audio through JUCE library for audio. The problem I'm trying to solve is that when the app is using a full duplex audio interface such as one from Focusrite Scarlett series for output, the app shows a dialog requesting permission to use microphone.
The root cause of the issue is that by default, Core Audio opens full duplex devices for both input and output. On previous macOS versions, I was able to work around the problem by disabling the input stream before starting the IOProc, by setting AudioHardwareIOProcStreamUsage to all zeros for input. On macOS Sonoma this disables input so that the microphone indicator is not shown, but the permission popup is still shown. What other reasons are there to show the popup?
I have noticed that Chrome and Slack have the same problem that they show the microphone popup when trying to play sounds on the Focusrite, but for example Deezer manages without the popup.
I am seeing problems with the volume level over Bluetooth after the iOS 17.2 update. Previously the problem occurred on both the iPhone 11 and the iPhone 15 Pro; after the 17.2 update it seems to be fixed on the iPhone 11, but it still persists on the iPhone 15 Pro. I have never had problems with the volume level in my car, but Apple has changed something that continues to affect it. How can it be corrected? Thank you very much for your support. I did a test with the same song and the same volume level (maximum volume on the phone and volume 12 in my Suzuki Swift), and these were the decibel results obtained. Both the iPhone 11 and the iPhone 15 Pro are updated to iOS 17.2.
Can we confirm that as of iOS 16.3.1, key frames for MPEGTS via HLS are mandatory now?
I've been trying to figure out why https://chaney-field3.click2stream.com/ shows "Playback Error" across Safari, Chrome, Firefox, etc. I ran the diagnostics against one of the m3u8 files generated via Developer Tools (e.g. mediastreamvalidator "https://e1-na7.angelcam.com/cameras/102610/streams/hls/playlist.m3u8?token=" and then hlsreport validation_data.json) and see this particular error:
Video segments MUST start with an IDR frame
Variant #1, IDR missing on 3 of 3
Do Safari and iOS devices explicitly block playback when they don't find one? From what I understand, AngelCam simply acts as a passthrough for the video/audio packets and does no transcoding, but converts the RTSP packets into HLS for web browsers. But IP cameras are constantly streaming their data, and a user connecting to the site may be receiving the video between key frames, so it would likely violate this expectation.
From my investigation it seems this problem started happening in iOS 16.3. I'm seeing similar reports for other IP cameras here:
https://ipcamtalk.com/threads/blue-iris-ui3.23528/page-194#post-754082
https://www.reddit.com/r/BlueIris/comments/1255d78/ios_164_breaks_ui3_video_decode/
For what it's worth, when I re-encoded the MPEG-TS files (e.g. ffmpeg -i /tmp/streaming-master-m4-na3.bad/segment-375.ts -c:v h264 /tmp/segment-375.ts), it strips the non-key frames at the beginning, and playback then works properly if I host the same files on a static site and have the iOS device connect to it.
It seems like Chrome, Firefox, VLC, and ffmpeg are much more forgiving about missing key frames. I'm wondering what the reason is for enforcing this requirement, and can I confirm it's a recent change?
Hi Apple Team, we are observing the following error intermittently when trying to play back FairPlay-protected HLS streams. The error happens immediately after loading the certificate.
Playback with the same certificate on the same device (Mac, iPhone) works most of the time, but intermittently this error is observed with the following codes. code=6 means MEDIA_KEYERR_DOMAIN, but I did not find any information on what systemCode=4294955417 means. Is there a way to check what this system code means and what could be causing this intermittent behaviour?
{
  "code": 6,
  "systemCode": 4294955417
}
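One note that may help when searching: reinterpreting the unsigned systemCode as a signed 32-bit OSStatus gives a negative value in the range CoreMedia typically uses (a small sketch; I can't confirm which named error -11879 corresponds to):

// Reinterpret the unsigned systemCode as a signed 32-bit OSStatus.
let systemCode: UInt32 = 4294955417
let status = Int32(bitPattern: systemCode)
print(status) // prints -11879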