Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Issues After Switching to v3
I've been having issues with authorization after switching from v1 to v3, and I have tried some of the suggestions provided in a reply to an earlier post of mine. I have also tried to reach out to Apple Support a few times, but I haven't received any support that has helped me move forward. I have tested my token at https://jwt.io and I'm getting "Signature Verified", and I have tried multiple browsers in private/incognito mode. Now, when I try to sign in to my Apple Account to test my player, I am receiving an error that states "There is a Problem Connecting. There May be an Issue with Your Network." (which is not the case). I have tried everything I can think of, I'm at a loss, and would appreciate any help to get my project moving forward.

This is what I am seeing in the browser developer console (using Firefox):

Authorization failed: AUTHORIZATION_ERROR: Unauthorized
MKError https://js-cdn.music.apple.com/musickit/v3/musickit.js:13
authorize https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
asyncGeneratorStep$w https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
_next https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
media.mydomain.com:398:19
https://media.mydomain.com/:398
Replies: 5 · Boosts: 0 · Views: 418 · Activity: 3w
Unknown photos were added to my iPhone Photos app
Several unknown pictures were added to my iPhone Photos. I'm developing an iOS app in SwiftUI, and I'm sure that my app in development never asked for permission to save photos to the device, and these pictures were not taken by me, downloaded from the web, or synced from my iCloud. Why does this happen? Has my account or my iPhone been attacked or hacked? By the way, my device has debug mode turned on; could this setting lead to some vulnerability, or could some debug tool add photos to my albums? The attached pictures are the unknown photos in my Photos app. I've searched for this photo with Google and found that many developers on Stack Overflow use this picture when they run into trouble with image display or other manipulation. Where does this photo come from, and why does it exist in my photo albums?
Replies: 1 · Boosts: 0 · Views: 315 · Activity: 3w
macOS: AudioUnit packaged as .appex won't load when host app is sandboxed
Hi, I'm working on an audio mixing app that comes with bundled audio units providing some of the app's core functionality. For the next release of that app, we are planning two changes:

- make the app sandboxed
- package the bundled audio units as .appex bundles instead of .component bundles, so we don't need to take care of installing them at the correct spot in the file system

When trying this new approach, we run into problems where [[AVAudioUnitEffect alloc] initWithAudioComponentDescription:] crashes when trying to load our audio unit with the exception:

AVAEInternal.h:109 [AUInterface.mm:468:AUInterfaceBaseV3: (AudioComponentInstanceNew(comp, &_auv2)): error -10863

Our audio unit has the sandboxSafe flag enabled and loads fine when the host app is not sandboxed, so I'm guessing I got the bundle ID / code signing requirements for the .appex correct. It seems that my .appex isn't even loaded, and the system rejects it because of its metadata. Maybe there is something wrong with the Info.plist generated by Juice?

"BuildMachineOSBuild" => "23H222"
"CFBundleDisplayName" => "elgato_sample_recorder"
"CFBundleExecutable" => "ElgatoSampleRecorder"
"CFBundleIdentifier" => "com.iwascoding.EffectLoader.samplerecorderAUv3"
"CFBundleName" => "elgato_sample_recorder"
"CFBundlePackageType" => "XPC!"
"CFBundleShortVersionString" => "1.0.0.0"
"CFBundleSignature" => "????"
"CFBundleSupportedPlatforms" => [ 0 => "MacOSX" ]
"CFBundleVersion" => "1.0.0.0"
"DTCompiler" => "com.apple.compilers.llvm.clang.1_0"
"DTPlatformBuild" => "24C94"
"DTPlatformName" => "macosx"
"DTPlatformVersion" => "15.2"
"DTSDKBuild" => "24C94"
"DTSDKName" => "macosx15.2"
"DTXcode" => "1620"
"DTXcodeBuild" => "16C5032a"
"LSMinimumSystemVersion" => "10.13"
"NSExtension" => {
    "NSExtensionAttributes" => {
        "AudioComponents" => [
            0 => {
                "description" => "Elgato Sample Recorder"
                "factoryFunction" => "elgato_sample_recorderAUFactoryAUv3"
                "manufacturer" => "Manu"
                "name" => "Elgato: Elgato Sample Recorder"
                "sandboxSafe" => 1
                "subtype" => "Znyk"
                "tags" => [ 0 => "Effects" ]
                "type" => "aufx"
                "version" => 65536
            }
        ]
    }
    "NSExtensionPointIdentifier" => "com.apple.AudioUnit-UI"
    "NSExtensionPrincipalClass" => "elgato_sample_recorderAUFactoryAUv3"
}
"NSHighResolutionCapable" => 1

Any ideas what I am missing?
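For reference, a minimal Swift sketch (an assumption worth checking, not a confirmed fix) of instantiating the AUv3 asynchronously and out of process, which is the sandbox-friendlier path compared with the synchronous AUv2-style initializer; the function name is hypothetical:

```swift
import AVFoundation

// Sketch: instantiate the bundled effect out of process. Out-of-process loading
// is generally what AUv3 app extensions expect, and it avoids pulling the
// extension's code into the sandboxed host process.
func loadEffect(_ desc: AudioComponentDescription,
                completion: @escaping (AVAudioUnit?) -> Void) {
    AVAudioUnit.instantiate(with: desc, options: [.loadOutOfProcess]) { unit, error in
        if let error {
            print("Instantiation failed: \(error)")
        }
        completion(unit)   // unit is an AVAudioUnitEffect-compatible node on success
    }
}
```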
Replies: 4 · Boosts: 0 · Views: 286 · Activity: 3w
Does PHAsset localIdentifier change after device transfer?
Hi, I'm working on a photo backup app. I track the PHAsset localIdentifier to determine which photos have been backed up and which haven't. Recently, I've noticed that two users seem to have had localIdentifier values change after transferring data to a new iPhone using Quick Start. Additionally, others on Stack Overflow have mentioned that the localIdentifier sometimes changes after updating the iOS version: https://stackoverflow.com/questions/40094728/phobject-localidentifier-reliability I'd like to confirm the reliability of the localIdentifier after an iOS version upgrade or device transfer. Can I continue using these locally stored localIdentifiers? Or is there another recommended approach, such as using PHCloudIdentifier?
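For reference, a minimal sketch (assuming iOS 15 or later) of mapping already-stored local identifiers to PHCloudIdentifier values, which are designed to stay stable across devices; the helper name is hypothetical:

```swift
import Photos

// Hypothetical helper: convert tracked local identifiers into cloud identifier
// strings, which can be persisted instead of (or alongside) the local identifiers.
func cloudIdentifiers(forLocalIdentifiers localIDs: [String]) -> [String: String] {
    var result: [String: String] = [:]
    let mappings = PHPhotoLibrary.shared().cloudIdentifierMappings(forLocalIdentifiers: localIDs)
    for (localID, mapping) in mappings {
        switch mapping {
        case .success(let cloudID):
            result[localID] = cloudID.stringValue   // persist this string
        case .failure(let error):
            print("No cloud identifier for \(localID): \(error)")
        }
    }
    return result
}
```

The reverse mapping, localIdentifierMappings(for:), can then be used after a device transfer to re-resolve assets from the stored cloud identifiers.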
Replies: 2 · Boosts: 0 · Views: 257 · Activity: 3w
Photo permission dialog not shown when iOS app runs on Mac
According to the docs: The first time your app performs an operation that requires [photo library] authorization, the system automatically and asynchronously prompts the user for it. (https://developer.apple.com/documentation/photokit/delivering-an-enhanced-privacy-experience-in-your-photos-app) I.e. it's not necessary for the app to call PHPhotoLibrary.requestAuthorization. This does seem to be what happens when my app runs on an iPhone or iPad; the prompt is shown. But when it runs on a Mac in "designed for iPad" mode, the permission dialog is not presented. Instead the code continues to see status == .notDetermined. That's today, on macOS 15.3. It may have worked in the past. Is anyone else seeing issues with this? Should I call requestAuthorization explicitly? (Would that actually work?)
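For context, a minimal sketch of requesting authorization explicitly, which is the workaround the post asks about; whether it changes the behavior in "Designed for iPad" mode is untested here, and the helper name is hypothetical:

```swift
import Photos

// Hypothetical workaround sketch: request authorization explicitly instead of
// relying on the automatic prompt, then continue only once a decision exists.
func ensurePhotoAccess(_ completion: @escaping (Bool) -> Void) {
    let current = PHPhotoLibrary.authorizationStatus(for: .readWrite)
    guard current == .notDetermined else {
        completion(current == .authorized || current == .limited)
        return
    }
    PHPhotoLibrary.requestAuthorization(for: .readWrite) { status in
        completion(status == .authorized || status == .limited)
    }
}
```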
Replies: 0 · Boosts: 0 · Views: 255 · Activity: 3w
How to write RGB & Depth Frames Without Losing Synchronization
I’m currently working on a project where I capture both depth frames and RGB frames using AVCaptureDataOutputSynchronizer. Depth frames are stored as raw binary data and RGB frames are saved with AVAssetWriter. The issue I’m facing is that AVAssetWriter enforces a fixed framerate, meaning it adds or discards frames to maintain that rate (as I understand it). This causes a desynchronization between the depth and RGB frames, which is a problem because I need each depth frame to be exactly matched with the corresponding RGB frame as they were captured. How can I ensure that the RGB frames are saved without AVAssetWriter modifying the frame count?
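For reference, a sketch of appending frames with their original presentation timestamps so the written timing matches capture; the output settings and the logging line are placeholders, and this assumes one writer input fed from the synchronizer callback:

```swift
import AVFoundation

// Sketch: a video writer input that preserves each buffer's own capture timestamp.
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080
])
writerInput.expectsMediaDataInRealTime = true   // live capture; don't buffer for offline encoding

func write(_ sampleBuffer: CMSampleBuffer, with input: AVAssetWriterInput) {
    // If the input isn't ready, the frame is skipped here by this code,
    // not re-timed by AVAssetWriter.
    guard input.isReadyForMoreMediaData else { return }

    // append(_:) keeps the buffer's original presentation timestamp; recording the
    // same timestamp next to the matching depth frame keeps the two streams aligned.
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    input.append(sampleBuffer)
    print("wrote video frame at \(pts.seconds)")   // replace with depth-side bookkeeping
}
```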
Replies: 0 · Boosts: 0 · Views: 259 · Activity: 3w
Music Keeps cutting off
Ever since the iOS 18.3 update, every time I put my AirPods in and connect them to my phone, my Mac, or my iPad, they have been disconnecting without reason, pausing songs I'm in the middle of playing, and only partially reconnecting in one pod. It's getting really frustrating.
Replies: 1 · Boosts: 0 · Views: 234 · Activity: 3w
Build a vision capture system for Apple Vision Pro
Hi, I am a newbie here. We have been given a task to build a robotic vision system to capture immersive video in a hazy environment, which will later be played on Apple Vision Pro. I am thinking of starting with 2 or 4 basic CMOS camera sensors, such as the IMX378, AR0144, or VD66GY, and designing an FPGA-based circuit to synchronously capture and store raw frame-by-frame data. Some initial frame processing, such as demosaicing and filtering, can also be done by the FPGA. Then I would use software post-processing to convert the data into a video format compatible with Apple Vision Pro. Will this idea work? I can handle the raw data capture, but I'm unsure whether this approach is feasible and what post-processing software I should use. Thanks a lot for your suggestions! Charlie
Replies: 0 · Boosts: 0 · Views: 200 · Activity: 3w
AVPlayer error: Too many open files
For some users in production, there's a high probability that after launching the app, using AVPlayer to play any local audio resource results in the following error. Restarting the app doesn't help.

Error Domain=AVFoundationErrorDomain Code=-11800 "这项操作无法完成" (This operation could not be completed) UserInfo={NSLocalizedFailureReason=发生未知错误(24) (An unknown error occurred (24)), NSLocalizedDescription=这项操作无法完成 (This operation could not be completed), NSUnderlyingError=0x30311f270 {Error Domain=NSPOSIXErrorDomain Code=24 "Too many open files"}}

I've checked the code, and there aren't actually multiple AVPlayers playing simultaneously. What could be causing this?
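POSIX error 24 maps to EMFILE, meaning the process hit its open-file-descriptor limit, so something may be leaking file handles even if only one player is visible. As a defensive sketch, not a confirmed fix, one pattern is to keep a single long-lived player and release items explicitly:

```swift
import AVFoundation

// Sketch: reuse one AVPlayer and swap items instead of creating a new player
// per file, so file handles from previous items can be released promptly.
final class LocalAudioPlayer {
    private let player = AVPlayer()

    func play(fileURL: URL) {
        let item = AVPlayerItem(url: fileURL)
        player.replaceCurrentItem(with: item)   // previous item (and its file handle) can go away
        player.play()
    }

    func stop() {
        player.pause()
        player.replaceCurrentItem(with: nil)
    }
}
```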
Replies: 0 · Boosts: 0 · Views: 272 · Activity: 4w
AVCaptureDevice rotationCoordinator modifying CALayer on switching devices
I am trying to use the AVCaptureDevice.RotationCoordinator API to observe angles for preview and capture, and it seems there is an issue with the API when it is used with an arbitrary CALayer (one that is not an AVCaptureVideoPreviewLayer) and cameras are switched. Here is my setup. The function below is defined in an actor class called CameraManager that performs the setup of the rotation coordinator.

func updateRotationCoordinator(_ callback: @escaping @MainActor (CGFloat) -> Void) {
    guard let device = sessionConfiguration.activeVideoInput?.device,
          let displayLayer = displayLayer else { return }

    cancellables.removeAll()

    rotationCoordinator = AVCaptureDevice.RotationCoordinator(device: device, previewLayer: displayLayer)
    guard let coordinator = rotationCoordinator else { return }

    coordinator.publisher(for: \.videoRotationAngleForHorizonLevelPreview)
        .receive(on: DispatchQueue.main)
        .sink { degrees in
            let radians = degrees * .pi / 180
            MainActor.assumeIsolated {
                callback(radians)
            }
        }
        .store(in: &cancellables)
}

This works the very first time, but when I switch cameras and call this function again, it throws a runtime error saying the view's layer was modified from a non-main thread. This happens at the very line where the rotation coordinator is recreated. It's not clear why initializing the rotation coordinator should modify CALayer properties right in its init method.

Modifying properties of a view's layer off the main thread is not allowed: view <MyApp.DisplayLayerView: 0x102ffaf40> with nearest ancestor view controller <_TtGC7SwiftUI19UIHostingControllerGVS_15ModifiedContentVS_7AnyViewVS_12RootModifier__: 0x101f7fb80>; backtrace: (
0 UIKitCore 0x0000000194a977b4 575E5140-FA6A-37C2-B00B-A4EACEDFDA53 + 22509492
1 UIKitCore 0x000000019358594c 575E5140-FA6A-37C2-B00B-A4EACEDFDA53 + 416076
2 QuartzCore 0x00000001927f5bd8 D8E8E86D-85AC-3C90-B2E1-940235ECAA18 + 43992
3 QuartzCore 0x00000001927f5a4c D8E8E86D-85AC-3C90-B2E1-940235ECAA18 + 43596
4 QuartzCore 0x000000019283a41c D8E8E86D-85AC-3C90-B2E1-940235ECAA18 + 324636
5 QuartzCore 0x000000019283a0a8 D8E8E86D-85AC-3C90-B2E1-940235ECAA18 + 323752
6 AVFCapture 0x00000001af072a18 09192166-E0B6-346C-B1C2-7C95C3EFF7F7 + 420376
7 MyApp.debug.dylib 0x0000000105fa3914 $s10MyApp15CapturePipelineC25updateRotationCoordinatoryyy12CoreGraphics7CGFloatVScMYccF + 972
8 MyApp.debug.dylib 0x00000001063ade40 $s10MyApp11CameraModelC18switchVideoDevicesyyYaFTY3_ + 72
[frames 9 through 13 omitted: mangled SwiftUI view-body, closure, and Swift concurrency thunk symbols in MyApp.debug.dylib and libswift_Concurrency.dylib]
)
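A possible workaround sketch (an assumption, since the off-main-thread layer access appears to happen inside the initializer itself): construct the coordinator on the main actor and hand the result back to the capture actor:

```swift
import AVFoundation
import QuartzCore

// Workaround sketch (assumption, not documented behavior): build the coordinator
// on the main actor, since its initializer appears to touch the preview layer.
func makeRotationCoordinator(for device: AVCaptureDevice,
                             previewLayer: CALayer) async -> AVCaptureDevice.RotationCoordinator {
    await MainActor.run {
        AVCaptureDevice.RotationCoordinator(device: device, previewLayer: previewLayer)
    }
}
```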
Replies: 2 · Boosts: 0 · Views: 373 · Activity: 4w
Problem with UVC Device Access on visionOS
No external cameras show up in the app on visionOS. We use this sample code as a basis for our tests: https://developer.apple.com/documentation/visionos/displaying-video-from-connected-devices

We also received the needed entitlement from Apple, but every camera we tried so far does not show up on visionOS. We tried the following devices and hubs:

- Insta360 X4
- Somikon Endoscope Camera: USB HD Endoscope Camera
- EMEET Full HD Webcam - C960
- BENFEI Video/Audio Capture Card, 4K HDMI auf USB C/A
- Logitech C920 HD PRO Webcam
- Anker PowerConf C200
- Insta360 GO 3S
- Anker 341 USB-C Hub
- UGREEN Revodok Pro 10Gbps USB-C Hub

All Vision Pro devices we tried run visionOS 2.3. When trying the same code on iPad, we can actually use external cameras.

Steps to reproduce: Start the app on a Vision Pro device and connect an external camera. The connected camera does not show up in the dropdown.

Development environment: Xcode 16.2, macOS 15.3
Run-time configuration: iOS 18.3, visionOS 2.3
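For reference, a minimal sketch of how external cameras are discovered (assuming the .external device type used in the linked sample and the granted entitlement); it only lists devices, which helps confirm whether discovery itself returns anything:

```swift
import AVFoundation

// Sketch: enumerate connected UVC cameras via a discovery session.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.external],
    mediaType: .video,
    position: .unspecified
)

for device in discovery.devices {
    print("Found external camera: \(device.localizedName)")
}
```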
Replies: 2 · Boosts: 0 · Views: 408 · Activity: 4w
Under certain conditions, using CallKit does not automatically enable the microphone.
Issue: Under certain conditions, using CallKit does not automatically enable the microphone.

Steps to reproduce:
1. Start an outgoing call, then the user manually mutes the audio.
2. Receive a native incoming call, end the current call, then answer the new incoming call. (This order is important.)
3. End the incoming call.
4. Start another outgoing call and observe the microphone; do not manually mute or unmute.

Actual behavior: The audio icon indicates that the audio is unmuted, but the microphone remains off, and the small yellow dot in the top status bar (which represents the microphone) does not appear.

Expected behavior: The microphone should be on, consistent with the audio icon display, and the small yellow dot should appear in the top status bar.

Device: iPhone 16 Pro & iPhone 15 Pro, iOS 18.0+

Can it be reproduced using Speakerbox (the CallKit demo)? Yes.
Replies: 2 · Boosts: 1 · Views: 306 · Activity: 4w
How to Implement Screen Mirroring in iOS for Google TV?
I am developing an iOS application that supports screen mirroring to Google TV (or Chromecast with Google TV). My goal is to mirror the iPhone/iPad screen in real time to a Google TV device.

What I have tried so far: I have explored multiple approaches but haven't found a direct way to achieve low-latency screen mirroring. Here are some of my findings:

Google Cast SDK: The Google Cast SDK is primarily designed for casting media (videos, images, audio) rather than real-time mirroring. It supports custom receiver applications, but there are no direct APIs for full screen mirroring. Casting a recorded video is possible, but it introduces latency and is not real time.

ReplayKit for screen capture: RPScreenRecorder.shared().startCapture(handler: ...) allows capturing the iPhone screen as a video stream. However, sending this stream to Google TV in real time is a challenge. I could potentially encode the video as HLS and stream it, but the delay is significant.

RTSP/UDP streaming: Some third-party libraries support RTSP/UDP streaming for real-time screen sharing. Google TV does not natively support RTSP, making this approach difficult.

My questions:
- Is it possible to achieve real-time screen mirroring on Google TV using the Google Cast SDK?
- Does Google TV support WebRTC or any low-latency streaming protocol that can be used from iOS?
- Are there any alternative approaches to mirror an iOS screen to Google TV with minimal latency?

I would appreciate any guidance, code examples, or references to relevant documentation.
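For reference, a minimal ReplayKit capture sketch matching the approach described above; the transport to the receiver is left as a hypothetical stub, since that is the unresolved part:

```swift
import ReplayKit

// Sketch: capture the screen with ReplayKit. How the frames reach the TV
// (WebRTC, a custom Cast receiver, etc.) is the open question and is stubbed out.
func startMirroring() {
    let recorder = RPScreenRecorder.shared()
    recorder.startCapture(handler: { sampleBuffer, bufferType, error in
        guard error == nil, bufferType == .video else { return }
        // encodeAndSend(sampleBuffer)   // hypothetical transport to the receiver
    }, completionHandler: { error in
        if let error {
            print("Capture failed to start: \(error)")
        }
    })
}
```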
Replies: 0 · Boosts: 1 · Views: 282 · Activity: Feb ’25
Attaching depth map manually
Hi, I have a problem when I want to attach my grayscale depth map image to the real image. The produced depth map doesn't have the cameraCalibration value, which should be responsible for aligning the depth data with the image. How do I align the depth map? I saw an article about it, but it is not very detailed, so I might be missing some step. I tried to:

- convert my depth map into a pixel buffer
- create an image destination ref and add the image there
- add the auxData (depth map dict)

This is the output: there is some black space there, and my RGB image's colours change.
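For reference, a sketch of the usual embedding mechanics (assuming an AVDepthData built from the grayscale map); it only shows how the auxiliary depth data is attached, and alignment still depends on the missing calibration:

```swift
import AVFoundation
import ImageIO
import UniformTypeIdentifiers

// Sketch: write the RGB image plus the depth data as an auxiliary image in a HEIC.
// `depthData` is assumed to be an AVDepthData created from the grayscale map.
func writeHEIC(image: CGImage, depthData: AVDepthData, to url: URL) -> Bool {
    guard let dest = CGImageDestinationCreateWithURL(url as CFURL,
                                                     UTType.heic.identifier as CFString,
                                                     1, nil) else { return false }
    CGImageDestinationAddImage(dest, image, nil)

    var auxType: NSString?
    if let auxInfo = depthData.dictionaryRepresentation(forAuxiliaryDataType: &auxType),
       let auxType {
        // Attach the depth dictionary under the type AVDepthData reports (depth or disparity).
        CGImageDestinationAddAuxiliaryDataInfo(dest, auxType as CFString, auxInfo as CFDictionary)
    }
    return CGImageDestinationFinalize(dest)
}
```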
Replies: 1 · Boosts: 0 · Views: 268 · Activity: Feb ’25
MPRemoteCommandCenter not updating play/pause button to proper state on iOS
So I'm using AVAudioEngine. When playing audio I become the "now playing" app using the MPNowPlayingInfoCenter/MPRemoteCommandCenter APIs. When configuring MPRemoteCommandCenter I add a play/pause command target via -addTargetWithHandler on the togglePlayPauseCommand property.

Now I also have a play/pause button in my app's UI. When I pause playback from my app's UI (which means I'm the active app, in the foreground), what I do is this: I pause the AVAudioPlayerNode I'm using with AVAudioEngine. I do not stop, reset, etc. the AVAudioEngine; I only pause the player node. My thought process is that the user just pressed pause and it is very likely that he will hit play to resume playback in the near future, because my app is in the foreground and the user just hit the pause button. If my app moves to the background and I receive a memory warning, I presume it would make sense to tear down the engine or pause it. Perhaps I'm wrong about this?

When I initially hit the play button from my app's UI I also activate my AVAudioSession. I do this in a high-priority NSOperation, since the documentation warns that "we recommend that applications not activate their session from a thread where a long blocking operation will be problematic."

So now I'm playing and I hit pause from my app's UI. Then I quickly bring up the Now Playing center and I see that I'm the "Now Playing" app, but the play/pause button is showing the pause icon instead of the play icon even though I'm in the paused state. I do set MPNowPlayingInfoCenter's playbackState to MPNowPlayingPlaybackStatePaused when I pause. Not surprisingly, this doesn't work; the documentation states this is for macOS only.

So the only way to get MPRemoteCommandCenter to show the play image for the play/pause button is to deactivate my AVAudioSession when I pause playback? Since I change the active state of my audio session in an NSOperation (because the documentation recommends not activating the session from a thread where a long blocking operation would be problematic), the play/pause toggle in the remote command center won't immediately update, since I'm doing it on another thread. In my opinion it feels inappropriate for a play/pause button to wait on an NSOperation deactivating the audio session before updating its UI when I already know my play/paused state; it should update right away, like the button in my app does. Wouldn't it be nicer to just use MPNowPlayingInfoCenter's playbackState property on iOS too? If I'm no longer the now-playing app or the active audio session, it doesn't matter, since I'm not in the Now Playing UI; just ignore it?

Also, is it recommended that I explicitly deactivate my audio session every time the user pauses audio in my app (when I'm in the foreground)? When I do deactivate the audio session I get an error, AVAudioSessionErrorCodeIsBusy (but the button in the Now Playing center updates to the proper image). I do this:

-(void)pause
{
    [self.playerNode pause];
    [self runOperationToDeactivateAudioSession];

    // This does nothing on iOS:
    MPNowPlayingInfoCenter *nowPlayingCenter = [MPNowPlayingInfoCenter defaultCenter];
    nowPlayingCenter.playbackState = MPNowPlayingPlaybackStatePaused;
}

So in -runOperationToDeactivateAudioSession I get the AVAudioSessionErrorCodeIsBusy.
According to the documentation: "Starting in iOS 8, if the session has running I/Os at the time that deactivation is requested, the session will be deactivated, but the method will return NO and populate the NSError with the code property set to AVAudioSessionErrorCodeIsBusy to indicate the misuse of the API." So pausing the player node isn't enough to meet the deactivation criteria; I guess I have to pause or stop the audio engine. I could probably wait until I receive a scene-went-to-background notification or something before deactivating my audio session (which is async, so the button may not update to the correct image in time). This seems like a lot of code to have to write to get a play/pause toggle to update, especially in an iPad multi-window scene environment.

What's the recommended approach? Should I always pause the AVAudioEngine instead of the player node? Should I always explicitly deactivate my audio session when the user pauses playback from my app's UI, even if I'm in the foreground? I personally like the idea of just being able to set [MPNowPlayingInfoCenter defaultCenter].playbackState = MPNowPlayingPlaybackStatePaused; but maybe that's because it would just make things easier on me. This does feel overcomplicated, though. If anyone can share some tips on how I should handle this, I'd appreciate it.
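One commonly suggested alternative (an assumption, not a documented requirement) is to reflect the paused state through the now-playing info dictionary rather than by deactivating the session; a minimal Swift sketch:

```swift
import MediaPlayer

// Sketch: report pause by setting the playback rate to 0 in the now-playing info,
// along with the current elapsed time, instead of tearing down the audio session.
func updateNowPlaying(isPlaying: Bool, elapsed: TimeInterval) {
    let center = MPNowPlayingInfoCenter.default()
    var info = center.nowPlayingInfo ?? [:]
    info[MPNowPlayingInfoPropertyPlaybackRate] = isPlaying ? 1.0 : 0.0
    info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = elapsed
    center.nowPlayingInfo = info
}
```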
Replies: 4 · Boosts: 0 · Views: 388 · Activity: Feb ’25
Missing Depth Frames When Recording with AVCaptureVideoDataOutputSampleBufferDelegate/AVCaptureDataOutputSynchronizerDelegate and AVAssetWriter
I've tried both AVCaptureVideoDataOutputSampleBufferDelegate (captureOutput) and AVCaptureDataOutputSynchronizerDelegate (dataOutputSynchronizer), but the number of depth frames and saved timestamps is significantly lower than the number of frames in the .mp4 file written by AVAssetWriter. In my code, I save:

- timestamps for each frame to a metadata file
- depth frames to a binary file
- video to an .mp4 file

If I record a 4-second video at 30 fps, the .mp4 file correctly plays for 4 seconds, but the number of stored timestamps and depth frames is much lower: around 70 frames instead of the expected 120. Does anyone know why this mismatch happens?

func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    // Read all outputs
    guard let syncedDepthData: AVCaptureSynchronizedDepthData = synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
          let syncedVideoData: AVCaptureSynchronizedSampleBufferData = synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
        // only work on synced pairs
        return
    }

    if syncedDepthData.depthDataWasDropped || syncedVideoData.sampleBufferWasDropped {
        return
    }

    let depthData = syncedDepthData.depthData
    let depthPixelBuffer = depthData.depthDataMap
    let sampleBuffer = syncedVideoData.sampleBuffer

    guard let videoPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
          let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) else {
        return
    }

    addToPreviewStream?(CIImage(cvPixelBuffer: videoPixelBuffer))

    if !canWrite() {
        return
    }

    // Extract the presentation timestamp (PTS) from the sample buffer
    let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

    // sessionAtSourceTime is the first buffer we will write to the file
    if self.sessionAtSourceTime == nil {
        // Make sure we don't start recording until the buffer reaches the correct time
        // (buffer is always behind; this will fix the difference in time)
        guard sampleBuffer.presentationTimeStamp >= self.recordFromTime! else { return }
        self.sessionAtSourceTime = sampleBuffer.presentationTimeStamp
        self.videoWriter!.startSession(atSourceTime: sampleBuffer.presentationTimeStamp)
    }

    if self.videoWriterInput!.isReadyForMoreMediaData {
        self.videoWriterInput!.append(sampleBuffer)
        self.videoTimestamps.append(
            Timestamp(
                frame: videoTimestamps.count,
                value: timestamp.value,
                timescale: timestamp.timescale
            )
        )

        let ddm = depthData.depthDataMap
        depthCapture.addDepthData(pixelBuffer: ddm, timestamp: timestamp)
    }
}
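One likely factor is that the depth stream often runs at a lower maximum frame rate than the video stream, so fewer synchronized pairs exist than video frames. As a partial-mitigation sketch (assumptions noted in the comments), the outputs can at least be asked not to drop late data:

```swift
import AVFoundation

// Config sketch: ask the outputs not to drop late data so fewer synchronized
// pairs are discarded. This alone may not recover every frame if the active
// depth format simply delivers fewer frames per second than the video format.
let depthDataOutput = AVCaptureDepthDataOutput()
depthDataOutput.alwaysDiscardsLateDepthData = false
depthDataOutput.isFilteringEnabled = false   // avoid temporal smoothing if raw frames are needed

let videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput.alwaysDiscardsLateVideoFrames = false
```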
Replies: 3 · Boosts: 0 · Views: 319 · Activity: Feb ’25
SCStreamDelegate stream:didStopWithError receiving null pointer for error
I have an SCStreamDelegate for capturing frames from applications. On recent point releases of macOS Sonoma, I've noticed that the stream is being cancelled with no user action being taken. I started trying to debug it, and when my error method is called, the error parameter being passed is null:

func stream(_ stream: SCStream, didStopWithError error: Error) {
    /* debugger shows this and segfaults if I try to print "\(error)"
       error (Error)
       > error = (Builtin.RawPointer) 0x0
    */

From what I can tell, error should be a valid NSError so I can check the error code, based on similar code I've seen in, for example, OBS (https://github.com/obsproject/obs-studio/blob/265239d4174f8d291b0de437088c5b78f8e27687/plugins/mac-capture/mac-sck-common.m#L29).

Usually when this happens, the menu bar icon for screen sharing (where I would click to change the shared window, etc.) stays there even after my app has closed and no apps are doing any sharing. Has anyone come across this before? Am I misinterpreting what the debugger is saying about the error parameter? I'm running macOS 14.7.3, but I just updated from 14.7.2 earlier and had basically the same issue on both macOS versions.
Replies: 1 · Boosts: 0 · Views: 280 · Activity: Feb ’25
AVURLAsset with AVURLAssetHTTPCookiesKey - Cookies not persisting on retry requests
I'm experiencing unexpected behavior with AVURLAsset and cookies. When setting cookies through the AVURLAssetHTTPCookiesKey option, they seem to be sent only on the initial request but not on retry attempts. Here's my current implementation:

let cookieProperties: [HTTPCookiePropertyKey: Any] = [
    .name: "sessionCookie",
    .value: "testValue",
    .domain: url.host ?? "",
    .path: "/",
    .secure: true
]

if let cookie = HTTPCookie(properties: cookieProperties) {
    let asset = AVURLAsset(url: url, options: [
        AVURLAssetHTTPCookiesKey: [cookie],
    ])
}

According to the documentation, AVURLAssetHTTPCookiesKey should apply the cookies to all requests made by this asset. However, when the initial request fails and AVPlayer retries, the cookies are not included in subsequent requests. Only when I store the cookie with HTTPCookieStorage.shared.setCookie does it persist.

Questions:
- Is this the expected behavior?
- If not, what could be causing the cookies to not persist for retry attempts?
- Is using HTTPCookieStorage.shared the recommended approach instead?

Environment: iOS 16+, using AVPlayer with AVURLAsset, streaming HLS content.

Any insights would be greatly appreciated.
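For reference, a sketch of the fallback described above, reusing cookieProperties and url from the snippet; the cookie is registered in the shared storage in addition to being passed through the asset options:

```swift
import AVFoundation

// Sketch: register the cookie in the shared storage so URLSession-level retries
// can pick it up, while still passing it through AVURLAssetHTTPCookiesKey.
if let cookie = HTTPCookie(properties: cookieProperties) {
    HTTPCookieStorage.shared.setCookie(cookie)
    let asset = AVURLAsset(url: url, options: [AVURLAssetHTTPCookiesKey: [cookie]])
    let item = AVPlayerItem(asset: asset)
    _ = item   // hand the item to the player as usual
}
```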
Replies: 0 · Boosts: 0 · Views: 224 · Activity: Feb ’25