Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Post

Replies

Boosts

Views

Activity

Why doesn’t AVPlayer / AVFoundation support MPEG-DASH (MPD)? Any public rationale?
Hi, I understand that AVPlayer/AVFoundation doesn’t natively play MPEG-DASH manifests (.mpd) today, while HLS is supported and widely documented by Apple. I’m not asking for roadmap commitments, but I’d like to understand whether there is any publicly documented rationale for not supporting DASH/MPD in AVFoundation (e.g., technical constraints, platform integration, DRM ecosystem, power/performance considerations, etc.).
Questions:
Is there any Apple statement / documentation explaining why DASH (MPD) isn’t supported in AVFoundation?
Is Apple’s recommended approach still “provide HLS for Apple clients” (potentially sharing CMAF segments and generating separate manifests)?
If there’s no public rationale, is filing Feedback Assistant the best channel for requesting MPD playback support?
Thanks!
1
0
165
1d
Logic Pro - discover channel upstream latency
Hello everyone, I've written an audio unit plugin that needs to be aware of any upstream latency caused by heavy plugins before it on the channel. Is there any way to query this? I know that Logic applies PDC at the channel's output (summing point), but I need to know what the accumulated latency is at the point the audio enters my plugin. Thanks!
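As far as I know there is no public API for a plugin to query the accumulated upstream latency at its insert point; the host computes plugin delay compensation from each unit's own reported latency. For reference, a minimal sketch of how an AUv3 unit reports its own latency (the class name and numbers are illustrative, not from the post):

```swift
import AudioToolbox

// Hypothetical AUv3 subclass: a plugin reports its own processing latency,
// which the host (e.g. Logic) uses for PDC. There is no documented counterpart
// for reading the *upstream* latency accumulated before your insert point.
class MyAUAudioUnit: AUAudioUnit {
    private let lookaheadFrames = 256.0        // example lookahead buffer

    // Hosts read this to apply delay compensation; expressed in seconds.
    override var latency: TimeInterval {
        lookaheadFrames / 48_000.0             // assuming a 48 kHz session
    }
}
```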
0
0
153
1d
It crashes when AVAssetReader is released
```
Thread 5 Crashed:
0   libobjc.A.dylib   0x19af7b038 objc_msgSend + 56
1   CoreFoundation    0x19dfdb618 cow_cleanup + 135
2   CoreFoundation    0x19dfdb6fc -[__NSDictionaryM dealloc] + 147
3   MediaToolbox      0x1b167636c FigRemotePropertyCacheTeardown + 31
4   MediaToolbox      0x1b1c5b648 remoteXPCAsset_Finalize + 107
5   CoreMedia         0x1b1e9166c FigBaseObjectFinalize + 275
6   CoreFoundation    0x19dfcc5ec _CFRelease + 295
7   AVFCore           0x1b1054d64 -[AVFigAssetTrackInspector dealloc] + 151
8   AVFCore           0x1b0f818d8 -[AVAssetTrack dealloc] + 63
9   CoreFoundation    0x19dfdba28 RELEASE_OBJECTS_IN_THE_ARRAY + 115
10  CoreFoundation    0x19dfdb7e0 -[__NSArrayM dealloc] + 147
11  AVFCore           0x1b0f52e04 -[AVURLAsset dealloc] + 167
12  libobjc.A.dylib   0x19af821f8 object_cxxDestructFromClass(objc_object*, objc_class*) + 115
13  libobjc.A.dylib   0x19af7df20 objc_destructInstance_nonnull_realized(objc_object*) + 75
14  libobjc.A.dylib   0x19af7d4a4 _objc_rootDealloc + 71
15  AVFCore           0x1b0fef988 -[AVAssetReaderOutput dealloc] + 415
16  AVFCore           0x1b0ff11ec -[AVAssetReaderTrackOutput dealloc] + 127
17  CoreFoundation    0x19dfe20a4 -[__NSSingleObjectArrayI dealloc] + 63
18  libobjc.A.dylib   0x19af7d3f8 AutoreleasePoolPage::releaseUntil(objc_object**) + 203
```
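It is hard to diagnose from the trace alone, but one pattern that avoids teardown races is to drain or cancel the reader on the same queue that consumes the samples, before the reader and its outputs are released. A minimal sketch under those assumptions, using the synchronous track API for brevity (the URL and processing are placeholders):

```swift
import AVFoundation

// Sketch: finish or cancel an AVAssetReader explicitly before it deallocates,
// so no in-flight copyNextSampleBuffer() races its teardown.
func readSamples(from url: URL) throws {
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
    reader.add(output)
    reader.startReading()

    while reader.status == .reading,
          let sample = output.copyNextSampleBuffer() {
        // ... process the sample buffer ...
        _ = sample
    }

    // If reading stopped early, cancel explicitly rather than relying on dealloc.
    if reader.status == .reading {
        reader.cancelReading()
    }
}
```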
1
0
160
1d
AppleAVBAudio assertion information
Hi, I'm currently developing an AVB hardware device, and I'm stuck because the Apple AVB stack is throwing errors without much information. Is there any way to get more detail about these assertions and why they are happening? Furthermore, is there any documentation on the AppleAVBAudio module? It would be very handy. Here are the logs shown in the Console:
```
Filtering the log data using "process == "coreaudiod""
Timestamp                       Thread    Type     Activity  PID    TTL
2025-12-05 15:44:27.087043+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.087545+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.088043+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.088546+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.089043+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.089545+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.090043+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.090545+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.091043+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.091545+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.092044+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.092544+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.093044+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.093552+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.094050+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.094543+0100 0x15ae74  Default  0x0       12965  0  coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
```
1
0
63
1d
VisionKit - crash after photo taken in VNDocumentCameraViewController in iOS 26 when Liquid Glass enabled
I'm adopting Liquid Glass in iOS 26. When I test VNDocumentCameraViewController with document scanning after Liquid Glass is enabled, the app crashes right after a photo is taken in VNDocumentCameraViewController (screenshot attached). The exception output in the Xcode console is:
```
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Layout requested for visible navigation bar, <UINavigationBar: 0x1240bde00; frame = (0 117; 390 54); opaque = NO; tintColor = UIExtendedSRGBColorSpace 1 1 0 1; layer = <CALayer: 0x120c21e60>> standardAppearance=0x12407b900 scrollEdgeAppearance=0x12407bb80 compactAppearance=0x12407b880 no-scroll-edge-support, when the top item belongs to a different navigation bar. topItem = <UINavigationItem: 0x1240bd800> style=navigator leftBarButtonItems=0x123d4e5f0 rightBarButtonItems=0x123d4d5a0, navigation bar = <UINavigationBar: 0x107b9ad00; frame = (0 47; 390 54); opaque = NO; autoresize = W; tintColor = UIExtendedSRGBColorSpace 1 1 0 1; layer = <CALayer: 0x120c20150>> delegate=0x10a805200 standardAppearance=0x107b2c300 scrollEdgeAppearance=0x107b2c280 compactAppearance=0x107b2c100, possibly from a client attempt to nest wrapped navigation controllers.'
*** First throw call stack:
(0x18e1db994 0x18b0f5814 0x18c092aa0 0x193b18660 0x193a7d540 0x193a7e020 0x1953ec4a0 0x1943b7d78 0x18ed83420 0x18ed82f74 0x18eb83134 0x18eb44c10 0x18eb70bc4 0x18eb7e74c 0x193ac8cd0 0x193ac8c04 0x193ad6afc 0x193ad5f8c 0x27b456560 0x18e12c4cc 0x18e15c0b0 0x18e15bfd8 0x18e133c1c 0x18e132a6c 0x22ed54498 0x193af6ba4 0x193a9fa78 0x193bcb68c 0x102cc2718 0x102cc2688 0x102cc2794 0x18b14ae28)
libc++abi: terminating due to uncaught exception of type NSException
```
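The exception text ("possibly from a client attempt to nest wrapped navigation controllers") suggests the scanner's own navigation bar is ending up inside another one. Whether or not that is what's happening in your case, a hedged sketch of the modal presentation pattern (the class and callback handling are illustrative) looks like this:

```swift
import UIKit
import VisionKit

final class ScanHostViewController: UIViewController, VNDocumentCameraViewControllerDelegate {

    // Present the scanner modally; VNDocumentCameraViewController manages its own
    // navigation bar, so pushing it onto an existing UINavigationController can
    // produce the "nested navigation bars" inconsistency seen in the crash.
    func startScanning() {
        guard VNDocumentCameraViewController.isSupported else { return }
        let scanner = VNDocumentCameraViewController()
        scanner.delegate = self
        present(scanner, animated: true)
    }

    func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                      didFinishWith scan: VNDocumentCameraScan) {
        controller.dismiss(animated: true)
        // ... handle scan.imageOfPage(at:) ...
    }

    func documentCameraViewControllerDidCancel(_ controller: VNDocumentCameraViewController) {
        controller.dismiss(animated: true)
    }
}
```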
1
1
308
1d
is the output frame rate of a CMIOExtension rounded or capped?
I made a CMIOExtension (a virtual camera) which generates its own output, for use in our in-house software testing. I wanted to make a video source with 29.97, 30, 59.94 and 60fps output. To this end, I created a CMIOExtensionDeviceSource which creates a CMIOExtensionDevice with one CMIOExtensionStreamSource with various stream formats contained in [CMIOExtensionStreamFormat], including one with both maxFrameDuration and minFrameDuration = CMTimeMake(value: 1000, timescale: 30000) and another with both maxFrameDuration and minFrameDuration = CMTimeMake(value: 1001, timescale: 30000). I've held off on the creation of the 59.94/60fps source for now until this problem is resolved. My virtual camera works, it produces a signal, but when I examine its associated AVCaptureDevice in the debugger, I find:
```
(lldb) po self.captureDevice?.formats[0].videoSupportedFrameRateRanges[0].maxFrameDuration
▿ Optional<CMTime>
  ▿ some : CMTime
    - value : 1000000
    - timescale : 30000000
    ▿ flags : CMTimeFlags
      - rawValue : 1
    - epoch : 0
```
I get the same value, 1000000/30000000, or exactly 30fps, for all the formats of my AVCaptureDevice. Is there something I'm doing wrong, or do CMIOExtensionDevices always round the frame rates? I can't force CoreMediaIO to produce frames at exactly my desired frame interval, but I'd like to ensure that the average frame rate is my desired rate. How can I do that? Frame emission is governed by a repeating DispatchSourceTimer with a repeat time specified in nanoseconds with the TimerFlags set to 'strict'.
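For comparison, a minimal sketch of publishing an exact 29.97 fps format from a stream source; the dimensions and pixel format are placeholders, and whether the capture stack then rounds what AVCaptureDevice reports (as you're seeing) is exactly the open question:

```swift
import CoreMediaIO
import CoreMedia

// Sketch: declare a stream format whose frame duration is exactly 1001/30000 s (29.97 fps).
func make2997Format() -> CMIOExtensionStreamFormat? {
    var desc: CMFormatDescription?
    CMVideoFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                   codecType: kCMVideoCodecType_422YpCbCr8, // illustrative pixel format
                                   width: 1920, height: 1080,
                                   extensions: nil,
                                   formatDescriptionOut: &desc)
    guard let desc else { return nil }

    let frameDuration = CMTime(value: 1001, timescale: 30000)   // 29.97 fps
    return CMIOExtensionStreamFormat(formatDescription: desc,
                                     maxFrameDuration: frameDuration,
                                     minFrameDuration: frameDuration,
                                     validFrameDurations: nil)
}
```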
0
0
102
1d
iPhone 14 Pro: External USB mic not available in AVAudioSession for call apps, but works in Voice Memos & Instagram Live
I’m facing a strange audio routing issue that seems specific to iPhone 14 Pro / Pro Max. I’m using LiveKit (WebRTC) in a React Native app, which uses AVAudioSession internally for audio capture (VoIP / call-style usage).
🔍 What’s happening: I’m using an external USB microphone. On these devices:
iPhone 11 → ✅ USB mic works
iPhone 13 → ✅ USB mic works
iPhone 17 Pro → ✅ USB mic works
iPhone 14 Pro Max → ❌ USB mic does NOT work
On iPhone 14 Pro Max, the same USB mic:
✅ Works in Voice Memos
✅ Works in Instagram Live
❌ Does NOT appear as an input option in my app
❌ Does NOT work in WhatsApp / Instagram calls
Also: in my app on iPhone 14 Pro Max, iOS does not show the audio input selector UI. On iPhone 17 Pro, the same app and same build does show the selector and the USB mic works.
⚙️ My audio session config (LiveKit):
```
await AudioSession.setAppleAudioConfiguration({
  audioCategory: 'playAndRecord',
  audioMode: 'default',
  audioCategoryOptions: ['allowBluetooth', 'defaultToSpeaker'],
});
await AudioSession.startAudioSession();
```
❓ My questions:
Is this a known limitation or behavior specific to iPhone 14 Pro / Pro Max?
Does iPhone 14 Pro have different audio routing rules for call / VoIP mode compared to other devices?
Why does the same USB mic work in recording apps (Voice Memos, Instagram Live) but not in call-style apps (LiveKit, WhatsApp, Instagram call)?
Is there any documented difference in AVAudioSession behavior on iPhone 14 Pro regarding external USB audio inputs?
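I can't speak to device-specific differences, but when debugging this it can help to step outside LiveKit and dump what AVAudioSession itself exposes after activation, and set the USB input as preferred if it appears. A native-side diagnostic sketch (not a LiveKit API):

```swift
import AVFoundation

// Sketch: activate a play-and-record session and route to a USB input if iOS exposes one.
func preferUSBInputIfAvailable() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)

    // Log what iOS actually offers on this device; on the affected iPhone 14 Pro Max
    // the question is whether the USB port appears here at all in a call-style session.
    for input in session.availableInputs ?? [] {
        print(input.portType.rawValue, input.portName)
    }

    if let usb = session.availableInputs?.first(where: { $0.portType == .usbAudio }) {
        try session.setPreferredInput(usb)
    }
}
```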
1
0
52
1d
DELETE/PUT in AppleMusic API
Hi, since today we are no longer able to make DELETE/PUT requests to the Apple Music API. As a result, we can't update playlist details, delete a playlist, delete tracks from a playlist, delete tracks from the library, and so on. Methods that were previously allowed now return only HTTP 403. Why this change in the Apple Music API? Can we hope it will come back soon?
20
2
9.9k
2d
Unique identifier of a MIDI device
Hello, I need to know what the unique identifier of a MIDI device (source/destination) is. Important note: I want to get the same ID when a device is reconnected (unplugged and then plugged in again). The main candidate is the kMIDIPropertyUniqueID property, but I don't know whether it meets the requirement above. Additional question: is it always available for any endpoint? There is also the kMIDIPropertyDeviceID property; what about that one? And one more option is simply the MIDIEndpointRef returned by MIDIGetSource or MIDIGetDestination. So what is the proper way to get an ID that persists between device reconnections?
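For what it's worth, kMIDIPropertyUniqueID is the property CoreMIDI intends as the persistent identifier for an object, while a MIDIEndpointRef is a transient handle that should not be persisted across reconnects. A minimal sketch of reading it:

```swift
import CoreMIDI

// Sketch: read the persistent unique ID of a MIDI endpoint.
// MIDIEndpointRef itself is a transient handle and should not be saved.
func uniqueID(of endpoint: MIDIEndpointRef) -> MIDIUniqueID? {
    var value: Int32 = 0
    let status = MIDIObjectGetIntegerProperty(endpoint, kMIDIPropertyUniqueID, &value)
    return status == noErr ? MIDIUniqueID(value) : nil
}

// Usage: enumerate sources and log their unique IDs.
func logAllSourceIDs() {
    for index in 0..<MIDIGetNumberOfSources() {
        let source = MIDIGetSource(index)
        if let id = uniqueID(of: source) {
            print("source \(index): uniqueID = \(id)")
        }
    }
}
```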
0
0
16
2d
Can backgrounded apps record audio?
I'd like to find out: can backgrounded apps record audio? In the past, as I recall, backgrounded apps were pretty restricted and couldn't do much of anything, but I'm not familiar with the current state of affairs. With iOS 15.8 and above, can backgrounded apps record audio if the user has given them permission to access the microphone? Thanks.
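Broadly, yes: an app that declares the audio background mode and keeps a record-capable AVAudioSession active can continue recording after moving to the background (microphone permission is still required, and the system shows the recording indicator). A minimal sketch, assuming "audio" is listed under UIBackgroundModes in Info.plist:

```swift
import AVFoundation

// Sketch: with UIBackgroundModes = ["audio"] in Info.plist and microphone permission granted,
// an active .playAndRecord (or .record) session lets recording continue in the background.
func startBackgroundCapableRecording(to url: URL) throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)

    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.record()
    return recorder
}
```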
3
0
497
2d
Changing Frame Rate of External Display on iPad
Hello, As far as I know, and in all of my testing, there is no way for a user or a developer to change the frame rate of the video output on iPadOS. If you connect an iPad via a USB hub or a USB-to-HDMI adapter and then connect it to an external monitor, it will output at 59.94fps. I have a video app where a user monitors live video at 25fps and 30fps; they often output to an external display, and there are times when the external display will stutter due to the mismatch in frame rate, i.e. using 25fps and outputting at 59.94fps. I thought it was impossible to change the video output frame rate, then in V3.1 of the Blackmagic Camera App I saw an interesting change in their release notes: ‘Support for HDMI Monitoring at Sensor Rate and Resolution’. This means there is some way to modify it; I'm not sure if this is done via a private API that Apple has allowed Blackmagic to use. If so, how can we access this, or is there a way to enable this that is undocumented? Thanks!
6
0
759
2d
Inquiries about AVPlayer's ABR switching logic and control APIs for HLS/LL-HLS
Hello, I am developing a custom player SDK using AVPlayer to support HLS and LL-HLS live streaming. I have some questions about the internal logic of AVPlayer regarding ABR, as this information is not explicitly covered in the documentation. ABR Switching Logic: Does AVPlayer trigger bitrate switching primarily based on stall occurrences (buffer starvation)? I am curious if the switching logic is reactive to stalls or if it proactively switches to prevent them based on throughput estimation. Developer Controls for ABR: To influence or control the ABR selection, are preferredPeakBitRate and preferredForwardBufferDuration the only properties available to developers? Are there any other recommended APIs to assist with ABR decisions? Thank you for your help.
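As far as I know, Apple has not published the internal switching heuristics, and beyond the two properties mentioned the other public per-item knobs are mostly resolution caps and the expensive-network variants. A sketch of setting the commonly used ones (values are illustrative):

```swift
import AVFoundation

// Sketch: public per-item knobs that influence (but do not directly drive) ABR selection.
func configureItem(for url: URL) -> AVPlayerItem {
    let item = AVPlayerItem(url: url)

    // Cap the variant bitrate the player may select (0 = no cap).
    item.preferredPeakBitRate = 4_000_000

    // Hint how far ahead the player should buffer, in seconds (0 lets the player decide).
    item.preferredForwardBufferDuration = 6

    // Cap the selected variant's resolution, e.g. to the rendering layer's size.
    item.preferredMaximumResolution = CGSize(width: 1920, height: 1080)

    // Separate caps applied on expensive (cellular / personal hotspot) networks.
    item.preferredPeakBitRateForExpensiveNetworks = 2_000_000
    item.preferredMaximumResolutionForExpensiveNetworks = CGSize(width: 1280, height: 720)

    return item
}
```

For observing what the player actually chose, the item's access log (AVPlayerItemAccessLogEvent, e.g. indicatedBitrate) is the usual read-only signal.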
2
0
210
2d
PDFKit doesn't return the correct page
Hello, We are occasionally seeing incorrect behavior from the PDFDocument method: func page(at index: Int) -> PDFPage? With certain PDF files, this method returns the wrong PDFPage. This occurs on iOS 18.3, 18.5 and 18.6.2 (and maybe on other versions). Try this PDF for instance (page 81 is returned when index = 2): https://drive.google.com/open?id=1MHm2wjfsbWB8OiRmARUMmvODYxp4DIqP&usp=drive_fs Also, this doesn't occur systematically with this PDF: when we make a copy of the file, we don't observe the issue. Could this be linked to some kind of internal cache issue?
1
0
263
3d
ProRAW: Demystify 48MP vs 12MP binning based on lighting?
Hi everyone, does anybody have any resources I could check out regarding the 48->12MP binning behavior on supported sensors? I know the 48MP sensor on iPhone can automatically bin pixels for better low-light performance, but I'm not sure how to reliably make this happen in practice. On iPhone 14 Pro and later with a 48MP sensor, I want the best of both worlds for ProRAW:
∙ Bright light: 48MP full resolution
∙ Low light: 12MP pixel-binned for better noise
```swift
photoOutput.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
let settings = AVCapturePhotoSettings(rawPixelFormatType: proRawFormat, processedFormat: [...])
settings.photoQualityPrioritization = .quality
// NOT setting settings.maxPhotoDimensions — always get 12MP
```
When I omit maxPhotoDimensions, iOS always returns 12MP regardless of lighting. When I set it to 48MP, I always get 48MP. Is there an API to let iOS automatically choose the optimal resolution based on conditions, or should I detect low light myself (via device.iso / exposureDuration) and set maxPhotoDimensions accordingly? Any help or direction would be much appreciated!
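I'm not aware of a mode that lets iOS choose between 48 MP and binned 12 MP automatically per ProRAW shot, so the approach described in the post is the usual one. A rough sketch, where the ISO/exposure thresholds are arbitrary placeholders (not Apple-recommended values) and photoOutput.maxPhotoDimensions is assumed to already be set to the 48 MP maximum:

```swift
import AVFoundation

// Sketch: pick 48 MP vs. binned 12 MP ProRAW per shot from a crude low-light heuristic.
func makeProRAWSettings(device: AVCaptureDevice,
                        rawPixelFormatType: OSType) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings(rawPixelFormatType: rawPixelFormatType)
    settings.photoQualityPrioritization = .quality

    // Placeholder heuristic: high ISO or long exposure => treat as low light.
    let isLowLight = device.iso > 800 ||
                     device.exposureDuration > CMTime(value: 1, timescale: 30)

    settings.maxPhotoDimensions = isLowLight
        ? CMVideoDimensions(width: 4032, height: 3024)   // binned 12 MP
        : CMVideoDimensions(width: 8064, height: 6048)   // full-resolution 48 MP
    return settings
}
```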
0
0
198
3d
AVCaptureSession preview briefly goes blank after interruption (lock/unlock or camera switch) while isRunning == true
Environment
Device: iPhone 15 Pro
iOS: iOS 18.0
Framework: AVFoundation
App type: Custom camera app using AVCaptureSession + AVCaptureVideoPreviewLayer
I’m seeing an intermittent but frequent issue where the camera preview layer briefly flashes empty after certain interruptions, even though the capture session reports itself as running and no errors are emitted. This happens most often after:
Locking and unlocking the device
Switching cameras (back ↔ front)
The issue is not 100% reproducible, but occurs often enough to be noticeable in normal usage.
What happens
The preview layer briefly flashes as empty (sometimes just a “micro-frame”)
Duration: typically ~0.5–2 seconds before frames resume
session.isRunning == true throughout
No crash, no runtime error, no interruption end failure
Focus/exposure restore correctly once frames resume
Visually it looks like the preview layer loses frames temporarily, even though the session appears healthy.
Repro
Intermittent but frequent after:
Lock → unlock device
Switching camera (front/back)
Timing-dependent and non-deterministic
Happens multiple times per session, but not every time
Key observation
AVCaptureSession.isRunning == true does not guarantee that frames are actually flowing. To verify this, I added an AVCaptureVideoDataOutput temporarily:
During the blank period, no sample buffers are delivered
Frames resume after ~1–2s without any explicit restart
Session state remains “running” the entire time
What I’ve tried (did NOT fix it)
Adding delays before/after startRunning() (0.1–0.5s)
Calling startRunning() on different queues
Restarting the session in AVCaptureSessionInterruptionEnded
Verifying session.connections (all show isActive == true)
Rebuilding inputs/outputs during interruption recovery
Ensuring startRunning() is never called between beginConfiguration() / commitConfiguration() (hit the expected runtime warning when attempted)
None of the above removed the brief blank preview.
Workaround (works visually but expensive)
This visually fixes the issue, but:
Energy impact jumps from Low → High in the Xcode Energy Gauge
AVCaptureVideoDataOutput processes 30–60 FPS continuously
The gap only lasts ~1–2s, but toggling the delegate on/off cleanly is difficult
Overall CPU and energy cost is not acceptable for production
Additional notes
CPU usage is already relatively high even without the workaround (this app is camera-heavy by nature)
With the workaround enabled, energy impact becomes noticeably worse
The issue feels like a timing/state desync between session state and actual frame delivery, not a UI issue
Questions
Is this a known behavior where AVCaptureSession.isRunning == true but frames are temporarily unavailable after interruptions?
Is there a recommended way to detect actual frame flow resumption (not just session state)?
Should the AVCaptureVideoPreviewLayer.connection (isActive / isEnabled) be explicitly checked or reset after interruptions?
Is there a lightweight, energy-efficient way to bridge this short “no frames” gap without using AVCaptureVideoDataOutput?
Is rebuilding the entire session the only reliable solution here, or is there a better pattern Apple recommends?
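Not an answer to whether this is expected behavior, but a lightweight way to correlate the blank periods with interruptions, without keeping a video data output attached, is to log the session's interruption notifications together with the preview connection state. The AVFoundation notification names and keys below are real; everything else is illustrative:

```swift
import AVFoundation

// Sketch: observe interruption begin/end and inspect the preview connection,
// instead of keeping an AVCaptureVideoDataOutput attached permanently.
final class CaptureInterruptionLogger {
    private var observers: [NSObjectProtocol] = []

    func attach(to session: AVCaptureSession, previewLayer: AVCaptureVideoPreviewLayer) {
        let center = NotificationCenter.default

        observers.append(center.addObserver(forName: .AVCaptureSessionWasInterrupted,
                                            object: session, queue: .main) { note in
            let reason = note.userInfo?[AVCaptureSessionInterruptionReasonKey]
            print("interrupted, reason:", String(describing: reason))
        })

        observers.append(center.addObserver(forName: .AVCaptureSessionInterruptionEnded,
                                            object: session, queue: .main) { _ in
            print("interruption ended, previewConnection.isActive =",
                  previewLayer.connection?.isActive ?? false)
        })
    }

    deinit {
        observers.forEach { NotificationCenter.default.removeObserver($0) }
    }
}
```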
0
0
299
5d
AVCaptureDevice.isFaceDrivenAutoFocusEnabled not working with continuousAutoFocus
Hi everyone, I'm developing a camera application that requires precise, predictable control over the focus system, and I'm encountering unexpected behavior with face-driven autofocus in continuous autofocus mode.
Issue: when using AVCaptureDevice.FocusMode.continuousAutoFocus, the system continues to prioritize faces for focus even after attempting to disable face-driven autofocus with:
```swift
device.automaticallyAdjustsFaceDrivenAutoFocusEnabled = false
device.isFaceDrivenAutoFocusEnabled = false
```
Observations: the behavior is inconsistent across different scenes. In well-lit/properly exposed scenes, focus persistently locks onto faces, ignoring my configuration. In underexposed scenes, the intended focus behavior is more consistently respected.
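One thing worth double-checking: both properties have to be set while the device is locked for configuration, and automaticallyAdjustsFaceDrivenAutoFocusEnabled must be false before isFaceDrivenAutoFocusEnabled is changed (otherwise the system throws). A minimal sketch of that ordering:

```swift
import AVFoundation

// Sketch: disable face-driven AF explicitly. The automatic-adjustment flag must be
// turned off before isFaceDrivenAutoFocusEnabled is set, inside lockForConfiguration().
func disableFaceDrivenAutoFocus(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    device.automaticallyAdjustsFaceDrivenAutoFocusEnabled = false
    device.isFaceDrivenAutoFocusEnabled = false

    if device.isFocusModeSupported(.continuousAutoFocus) {
        device.focusMode = .continuousAutoFocus
    }
}
```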
0
0
254
5d
CIRAWFilter.outputImage first-time cost is huge (~3s), subsequent calls are ~3ms. Any official way to pre-initialize RAW pipeline (without taking a real photo)?
Hi Apple Developer Forums, I’m developing an iOS camera app that processes RAW captures using Core Image. I’m seeing a large “first use” performance penalty specifically when creating the CIImage from CIRAWFilter.outputImage.
What’s slow (important detail)
I’m measuring the time for:
```swift
let rawFilter = CIRAWFilter(imageData: rawData, identifierHint: hint)
let ciImage = rawFilter.outputImage
```
This is not CIContext.render(...) / createCGImage(...). It’s just the time to access outputImage (i.e., building the Core Image graph / RAW pipeline setup).
Observed behavior
First time accessing CIRAWFilter.outputImage: ~3 seconds
Second time (same app session, similar RAW): ~3 milliseconds
So something heavy is happening only on first use (decoder initialization, pipeline setup, shader/library compilation, caching, etc.). Using Metal System Trace, I also noticed that during the slow first call there are many “Create MTLLibrary” events, while the second call doesn’t show this pattern.
Warm-up attempts using a bundled DNG
I tried to “warm up” early (e.g., on camera screen entry) by loading a bundled DNG and accessing CIRAWFilter.outputImage:
Warm-up with a ~247 KB DNG → first real RAW outputImage cost drops to ~1.42s
Warm-up with a ~25 MB DNG → first real RAW outputImage cost drops to ~843ms
This helps, but it’s still far from the steady-state ~3ms.
Warm-up by capturing a real RAW (works, but concerns)
The only method that fully eliminates the delay is to trigger a real RAW capture programmatically before the user’s first photo, then use that captured rawData to warm up the CIRAWFilter.outputImage path. This brings the first user-facing capture close to the steady-state timing. However:
In some regions, the camera shutter sound cannot be suppressed, so a “hidden warm-up capture” is unacceptable UX.
I’m also unsure whether triggering a real capture without an explicit user action could raise compliance/privacy concerns, even if the image is immediately discarded and never saved/uploaded.
Questions
Is the large first-time cost of CIRAWFilter.outputImage expected (RAW pipeline initialization / shader compilation)?
Is there an Apple-recommended way to pre-initialize the Core Image RAW pipeline / Metal resources so the first outputImage is fast, without taking a real photo?
Are there any best practices (e.g. CIContext creation timing, prepareRender(...), specific options) that reliably reduce this first-use overhead for CIRAWFilter?
Attachments
Figure 1: First RAW capture with no warm-up (~3s outputImage time)
Figure 2: First RAW capture after warm-up with bundled DNG (improved but still hundreds of ms)
Thanks for any guidance or experience sharing!
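One variation that might be worth trying (no guarantee it closes the gap entirely): drive the warm-up DNG all the way through a small render on the same CIContext you will use for real captures, so the Metal libraries visible in the trace are actually compiled rather than just the filter graph being built. A sketch, with the bundled file name as a placeholder:

```swift
import CoreImage

// Sketch: warm the RAW pipeline by rendering a tiny crop of a bundled DNG through the
// same CIContext that will render real captures. "warmup.dng" is a placeholder name.
func warmUpRAWPipeline(context: CIContext) {
    guard let url = Bundle.main.url(forResource: "warmup", withExtension: "dng"),
          let data = try? Data(contentsOf: url),
          let rawFilter = CIRAWFilter(imageData: data, identifierHint: nil),
          let image = rawFilter.outputImage else { return }

    // Render a small region so the graph is compiled and cached without a full-size decode.
    let tile = image.cropped(to: CGRect(x: 0, y: 0, width: 64, height: 64))
    _ = context.createCGImage(tile, from: tile.extent)
}
```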
3
2
507
5d
SpeechAnalyzer error "asset not found after attempted download" for certain languages
I am trying to use the new SpeechAnalyzer framework in my Mac app, and am running into an issue for some languages. When I call AssetInstallationRequest.downloadAndInstall() for some languages, it throws an error: Error Domain=SFSpeechErrorDomain Code=1 "transcription.ar asset not found after attempted download." The ".ar" appears to be the language code, which in this case was Arabic. When I call AssetInventory.status(forModules:) before attempting the download, it is giving me a status of "downloading" (perhaps from an earlier attempt?). If this language was completely unsupported, I would expect it to return a status of "unsupported", so I'm not sure what's going on here. For other languages (Polish, for example) SpeechTranscriber.supportedLocale(equivalentTo:) is returning nil, so that seems like a clearly unsupported language. But I can't tell if the languages I'm trying, like Arabic, are supported and something is going wrong, or if this error represents something I can work around. Here's the relevant section of code. The error is thrown from downloadAndInstall(), so I never even get as far as setting up the SpeechAnalyzer itself.
```swift
private func setUpAnalyzer() async throws {
    guard let sourceLanguage else {
        throw Error.languageNotSpecified
    }
    guard let locale = await SpeechTranscriber.supportedLocale(equivalentTo: Locale(identifier: sourceLanguage.rawValue)) else {
        throw Error.unsupportedLanguage
    }

    let transcriber = SpeechTranscriber(locale: locale, preset: .progressiveTranscription)
    self.transcriber = transcriber

    let reservedLocales = await AssetInventory.reservedLocales
    if !reservedLocales.contains(locale) && reservedLocales.count == AssetInventory.maximumReservedLocales {
        if let oldest = reservedLocales.last {
            await AssetInventory.release(reservedLocale: oldest)
        }
    }

    do {
        let status = await AssetInventory.status(forModules: [transcriber])
        print("status: \(status)")
        if let installationRequest = try await AssetInventory.assetInstallationRequest(supporting: [transcriber]) {
            try await installationRequest.downloadAndInstall()
        }
    }
    ...
```
8
0
818
5d
Live Streaming issue for the RTSP
We have the application 'ADS Smart', a companion application for our ADS Dashcam. We offer a feature that lets users stream the live footage of the dashcam cameras through the app. Currently, we are experiencing a delay of 30+ seconds before the live stream appears, i.e. the first frame of the live footage takes around 30+ seconds to display in the app. We are using the MobileVLCKit library to stream the videos in the app.
The current flow of the code:
1. Flutter triggers the native playback via a method channel. The Dart side calls the iOS method channel <identifier_name>/ts_player with method playTSFromURL, passing: url (e.g. rtsp://.... for live), playerId, viewId (a stable ID used to host native UI), showControls, and an optional localIp.
2. AppDelegate receives the call and prepares networking. Entry point: the AppDelegate.tsChannel handler for "playTSFromURL" in AppDelegate.swift. It resolves the Wi-Fi interface and local IP if possible: it sets VLC_SOURCE_ADDRESS to the Wi-Fi IP (when available) to prefer Wi-Fi for the stream, uses NWPathMonitor and direct interface inspection to find the Wi-Fi interface (e.g., en0) and IP, and kicks off best-effort route priming to the dashcam IP/ports (non-blocking), see establishWiFiRoutePriority.
3. AppDelegate chooses the right player implementation. createPlayerForURL(_:) decides: RTSP (rtsp://..) → use the VLCKit-backed player (class TSStreamPlayer, which provides a VLC-backed video player for iOS, handling playback of Transport Stream (TS) URLs with strict main-thread UI updates, view safety, and stream management, using MobileVLCKit); .ts files → use the VLCKit-based player for playing already recorded videos in the app. If the selected player supports extras (e.g. TSStreamPlayerExtras), it sets the local IP (if resolved) and the Wi-Fi interface name.
4. AppDelegate creates the native platform view container and overlay. platformView(for:parent:showControls:) creates a container UIView attached to the Flutter root view and adds a dedicated child videoHost[viewId], the host UIView for rendering video. If showControls == true, it adds TSPlayerControlsOverlay over the video and wires overlay callbacks back to Flutter via controlChannel (/player_controls). If showControls == false, it adds a minimal back button and wires it to onGalleryBack.
5. The player starts playback inside the host view. player.playTSFromURL(urlString, in: host) { success, error in ... } is called on the main thread. For RTSP/TS streams this is handled by TSStreamPlayer (VLCKit).
6. Success/failure is reported back to Flutter. The completion closure invoked in step 5 returns true on first real playback, or an error message on failure. The method channel result responds: true → Flutter knows playback started; FlutterError → Flutter can show an error.
7. Stopping and cleanup. "stop" on tsChannel stops and disposes the player(s). "removePlatformView" removes the overlay, back button, the host, and the container, and disposes any remaining players.
I am attaching the logs of the app while running. The actual issue is that when the iOS device is connected to the dashcam's Wi-Fi, iOS uses Mobile Data for the app's live-streaming-related traffic even though Wi-Fi is the main communication channel. The iOS device takes approximately 30 seconds to display the first frame of live footage in the app. Despite being connected to the dashcam’s Wi-Fi, the iOS device only sets the value of ES (en0) to Wi-Fi after multiple attempts, causing the live footage to appear in the app after this delay.
So, how can we set up the configuration so that the live footage from the dashcam cameras is displayed within just 2 to 3 seconds on the iOS device? ios.txt
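Separately from the Wi-Fi vs. cellular routing question, MobileVLCKit also buffers the stream before showing the first frame. Assuming your MobileVLCKit version exposes VLCMedia's addOption (worth verifying against the headers you build with), the standard VLC caching options can be lowered per media, for example:

```swift
import MobileVLCKit

// Sketch: shrink VLC's network buffering for an RTSP live source.
// The option names are standard VLC options; confirm they behave as expected
// in the MobileVLCKit version used by the app.
func makeLowLatencyMedia(urlString: String) -> VLCMedia? {
    guard let url = URL(string: urlString) else { return nil }
    let media = VLCMedia(url: url)
    media.addOption(":network-caching=300")   // ms of network buffering
    media.addOption(":live-caching=300")      // ms for live sources
    return media
}
```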
0
0
158
6d
iOS Speech Error on Mobile Simulator (Error fetching voices)
I'm writing a simple app for iOS and I'd like to be able to do some text-to-speech in it. I have a basic audio manager class with a "speak" function:
```swift
import Foundation
import AVFoundation

class AudioManager {
    static let shared = AudioManager()
    var audioPlayer: AVAudioPlayer?

    var isPlaying: Bool {
        return audioPlayer?.isPlaying ?? false
    }

    var playbackPosition: TimeInterval = 0

    func playSound(named name: String) {
        guard let url = Bundle.main.url(forResource: name, withExtension: "mp3") else {
            print("Sound file not found")
            return
        }
        do {
            if audioPlayer == nil || !isPlaying {
                audioPlayer = try AVAudioPlayer(contentsOf: url)
                audioPlayer?.currentTime = playbackPosition
                audioPlayer?.prepareToPlay()
                audioPlayer?.play()
            } else {
                print("Sound is already playing")
            }
        } catch {
            print("Error playing sound: \(error.localizedDescription)")
        }
    }

    func stopSound() {
        if let player = audioPlayer {
            playbackPosition = player.currentTime
            player.stop()
        }
    }

    func speak(text: String) {
        let synthesizer = AVSpeechSynthesizer()
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
        synthesizer.speak(utterance)
    }
}
```
And my app shows text in a ScrollView:
```swift
ScrollView {
    Text(self.description)
        .padding()
        .foregroundColor(.black)
        .font(.headline)
        .background(Color.gray.opacity(0))
}
.onAppear {
    AudioManager.shared.speak(text: self.description)
}
```
However, the text doesn't get read out (in the simulator). I see some output in the console:
Error fetching voices: Swift.DecodingError.dataCorrupted(Swift.DecodingError.Context(codingPath: [], debugDescription: "Invalid container metadata for _UnkeyedDecodingContainer, found keyedGraphEncodingNodeID", underlyingError: nil)). Using fallback voices.
I'm probably doing something wrong here, but not sure what.
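The "Error fetching voices" log is commonly seen on the simulator, but there is also a lifetime issue in the snippet as posted: the AVSpeechSynthesizer is a local variable inside speak(text:), so it can be deallocated before it finishes speaking. Keeping it as a stored property is the usual fix; a minimal sketch (class name illustrative):

```swift
import AVFoundation

// Sketch: keep the synthesizer alive for the duration of speech by storing it on the
// manager, instead of creating it as a local inside speak(text:).
final class SpeechManager {
    static let shared = SpeechManager()
    private let synthesizer = AVSpeechSynthesizer()

    func speak(text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
        synthesizer.speak(utterance)
    }
}
```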
5
1
627
1w