Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Post · Replies · Boosts · Views · Activity

MIDIFlushOutput does not cancel events scheduled over network
Calling MIDIFlushOutput on a network endpoint does not cancel events scheduled with future timestamps -- they continue to be sent. For example:

```swift
func send(eventLists: [MIDIEventList]) {
    let outputPortRef = ...
    let networkDestination = ...
    for var eventList in eventLists {
        MIDISendEventList(outputPortRef, networkDestination.objectRef, &eventList)
    }
}
...
MIDIFlushOutput(networkDestination.objectRef)
```

MIDIFlushOutput does successfully cancel scheduled events on a non-network endpoint. How can I clear all scheduled outgoing events over a MIDI network connection?
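For context, a minimal sketch of the send-then-flush pattern the post describes is below. The port and destination are assumed to be created elsewhere, the UMP word and the two-second offset are illustrative, and this is not a workaround for the network case.

```swift
import CoreMIDI

// Sketch: schedule one event two seconds in the future, then ask Core MIDI to flush it.
// `outputPort` and `destination` are placeholders created elsewhere (MIDIOutputPortCreate, etc.).
func scheduleAndFlush(outputPort: MIDIPortRef, destination: MIDIEndpointRef) {
    // Convert 2 seconds to mach host ticks for the event timestamp.
    var timebase = mach_timebase_info_data_t()
    mach_timebase_info(&timebase)
    let twoSecondsInTicks = 2_000_000_000 * UInt64(timebase.denom) / UInt64(timebase.numer)

    var eventList = MIDIEventList()
    let packet = MIDIEventListInit(&eventList, ._1_0)
    var noteOn: UInt32 = 0x20903C64   // MIDI 1.0 note-on, channel 1, note 60, velocity 100, as a UMP word
    _ = MIDIEventListAdd(&eventList, 1024, packet, mach_absolute_time() + twoSecondsInTicks, 1, &noteOn)

    MIDISendEventList(outputPort, destination, &eventList)

    // Cancels the pending event on a local destination; per the post, apparently not over the network.
    MIDIFlushOutput(destination)
}
```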
Replies: 1 · Boosts: 2 · Views: 197 · Activity: 3w

Audio track metadata on AirPlay receiver differs from source
Description: An HLS VOD stream contains several audio tracks that share the same LANGUAGE tag but have different NAME tags. https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/bipbop_16x9_variant.m3u8 For example:

```
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="bipbop_audio",LANGUAGE="eng",NAME="BipBop Audio 1",AUTOSELECT=YES,DEFAULT=YES
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="bipbop_audio",LANGUAGE="eng",NAME="BipBop Audio 2",AUTOSELECT=NO,DEFAULT=NO,URI="alternate_audio_aac/prog_index.m3u8"
```

You set up AirPlay from, e.g., an iPhone, iPad, or Mac to an Apple TV or Mac. Expected behavior: the audio track dropdown in AVPlayer and QuickTime shows both LANGUAGE and NAME on the AirPlay receiver just as on the AirPlay sender, so the user interface is consistent between playing a local stream and an AirPlay stream. Current status: the player UI on the AirPlay receiver only shows the LANGUAGE tag. Question: is this a missing feature of AirPlay itself, or a bug?

Background: We'd like to offer an additional audio track with enhanced audio characteristics for better intelligibility of spoken words -- "Klare Sprache". Technically, "Klare Sprache" uses an AI-based algorithm that separates speech from other audio elements in the broadcast, amplifying the dialogue and reducing the volume of background sounds like music or environmental noise. The technology was introduced by ARD and ZDF in Germany and is available on select programs, primarily via HD broadcasts and digital platforms like HbbTV. Users can enable the feature directly from their television's audio settings, where it may be labeled "deu (qks)" or "Klare Sprache" depending on the device. It is available on a growing number of channels and is part of a broader effort to make television more accessible to viewers with hearing difficulties. It can be correctly signaled in HLS, e.g. https://ccavmedia-amd.akamaized.net/test/bento4multicodec/airplay1.m3u8:

```
# Audio
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="stereo-aac",LANGUAGE="de",NAME="Deutsch",DEFAULT=YES,AUTOSELECT=YES,CHANNELS="2",URI="ST.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="stereo-aac",LANGUAGE="de",NAME="Deutsch (Klare Sprache)",DEFAULT=NO,AUTOSELECT=YES,CHARACTERISTICS="public.accessibility.enhances-speech-intelligibility",CHANNELS="2",URI="KS.m3u8"
```

The problem remains that over AirPlay you don't get this extra information, only the LANGUAGE tag.
Replies: 2 · Boosts: 0 · Views: 195 · Activity: 3w

I/O buffer sizes for an audio driver based on IOUserAudioDevice
Dear Sirs, I've written an audio driver based on IOUserAudioDevice. In my IOOperationHandler I can receive and send the audio samples as expected. Is there any way to configure the number of samples transferred in each call? Currently it seems to be around 512 samples per call, which corresponds to about 10.7 ms at a 48 kHz sample rate. I'd like to achieve something like 48 or 96 samples per call. I did some experiments and tried calls to SetOutputLatency() etc., but so far I haven't found the right way to change the in_io_buffer_frame_size seen in the callback. I'd like to do this because smaller buffer sizes would allow lower latencies for the subsequent audio processing. Thanks and best regards, Johannes
Replies: 2 · Boosts: 0 · Views: 274 · Activity: Jul ’24

Which methods in which framework can separate an audio file into two files?
I'm having trouble using SFSpeechRecognizer and SFSpeechRecognitionTask to get the words from an audio file. I found a suggestion on Stack Overflow to split the audio file into smaller pieces. How would I do that programmatically using Swift in a macOS app Xcode project? I would prefer not to split the file into smaller files at all; I will submit another post with more information on that.
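One way to split a file by time range, not taken from the post, is AVAssetExportSession with a timeRange. A minimal sketch, where the preset, output file type, and timescale are assumptions:

```swift
import AVFoundation

// Sketch: export one time range of an audio file to a new .m4a file.
// The URLs and the start/duration values passed by the caller are placeholders.
func exportSegment(from sourceURL: URL, start: Double, duration: Double, to outputURL: URL,
                   completion: @escaping (Error?) -> Void) {
    let asset = AVURLAsset(url: sourceURL)
    guard let session = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
        completion(NSError(domain: "ExportSketch", code: -1))
        return
    }
    session.outputURL = outputURL
    session.outputFileType = .m4a
    session.timeRange = CMTimeRange(start: CMTime(seconds: start, preferredTimescale: 600),
                                    duration: CMTime(seconds: duration, preferredTimescale: 600))
    session.exportAsynchronously {
        completion(session.error)   // nil on success
    }
}
```

Calling this twice with adjoining time ranges would produce the two files the title asks about.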
Replies: 3 · Boosts: 0 · Views: 218 · Activity: 2w

[iPadOS 18 beta 3] Camera app cannot detect QR codes on iPad (7th gen)
Hi, after installing iPadOS 18 beta 3 on my iPad (7th gen), the default Camera app no longer detects QR codes. I tried updating to beta 7, but the issue remained. Third-party apps that use AVCaptureMetadataOutput in the AVFoundation framework to detect QR codes also no longer work. You can reproduce the issue by running the default Camera app or the AVFoundation sample code from the Apple developer site on an iPad 7th gen with the iPadOS 18 beta installed: https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces Has anyone else experienced this issue? I would like to know whether it occurs on other iPad models as well. This is similar to an issue that previously occurred with iPadOS 17.4: https://support.apple.com/en-lamr/118614 https://developer.apple.com/forums/thread/748092
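For anyone wanting a smaller reproduction case than the full sample project, a bare-bones AVCaptureMetadataOutput setup looks roughly like the sketch below; the class name and delegate wiring are illustrative, not from the post.

```swift
import AVFoundation

final class QRScannerSketch: NSObject, AVCaptureMetadataOutputObjectsDelegate {
    let session = AVCaptureSession()

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        let output = AVCaptureMetadataOutput()
        guard session.canAddInput(input), session.canAddOutput(output) else { return }
        session.addInput(input)
        session.addOutput(output)
        output.setMetadataObjectsDelegate(self, queue: .main)
        output.metadataObjectTypes = [.qr]   // must be set after the output is added to the session

        // startRunning() blocks, so call it off the main thread in real code.
        DispatchQueue.global(qos: .userInitiated).async {
            self.session.startRunning()
        }
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        let codes = metadataObjects.compactMap { $0 as? AVMetadataMachineReadableCodeObject }
        print(codes.compactMap(\.stringValue))
    }
}
```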
Replies: 2 · Boosts: 0 · Views: 295 · Activity: 2w

AudioDriverKit: avoiding IOUserAudioIOOperationWriteEnd calls for input loopback
Dear Sirs, I've written an audio driver based on AudioDriverKit. In my audio callback function I'm receiving calls with the I/O operations IOUserAudioIOOperationWriteEnd and IOUserAudioIOOperationBeginRead as expected: I see IOUserAudioIOOperationWriteEnd operations during playback in an application like VLC or the browser, and IOUserAudioIOOperationBeginRead when recording in Audacity etc. But when I open System Settings, go to Sound, and select my driver as input, I also see calls with IOUserAudioIOOperationWriteEnd that seem to carry the input data that was just read. I can also observe this when starting Teams. I think the purpose is to add the (mic) input to the output so you can listen to yourself. Nevertheless, I'd like to avoid this entirely, but I don't see a way to distinguish the playback audio data from the input audio data inside this callback. How could I do this? Or, even better, is there a switch that would completely disable these callbacks that forward the input to the output? Thanks and best regards, Johannes
Replies: 2 · Boosts: 0 · Views: 522 · Activity: Feb ’24

Save/Restore properties on reboot in audio driver based on AudioDriverKit
Dear Sirs, when writing an AudioServerPlugin I can use the host's WriteToStorage/CopyFromStorage functions to save and restore custom properties across restarts of the machine. Are there corresponding functions for an audio driver based on AudioDriverKit? What would be the recommended way to save and restore properties so that they are available again after a reboot? Thanks and best regards, Johannes
Replies: 1 · Boosts: 0 · Views: 404 · Activity: Feb ’24

iOS Audio Lockscreen Problem in PWA
Description: When running a PWA on iOS, playing audio from the lock screen works as expected until you leave the audio paused for 30 seconds. After this, the audio ceases to function until you return the PWA to the foreground.

Reproduction:
1. In a PWA, create an HTML5 audio element.
2. Load an audio file into it.
3. Set navigator.mediaSession data and action handlers for play and pause.
4. Everything is in working order: your audio plays and pauses from the lock screen.
5. Pause your audio and wait for 30 seconds.
6. Press the play button. Your audio will no longer function.

At this point, the only way to get the audio working again is to bring the PWA into the foreground. Once you do this, the audio is back in working order.

What is expected: In step 6, when you press the play button, the audio should play. The lock screen audio should not enter a non-functional state, or there should be some way to "wake up" the PWA.

Closing: If you follow these steps exactly on Android, you will see that the problem does not exist on those devices.
Replies: 2 · Boosts: 0 · Views: 254 · Activity: 3w

Metal Performance Shader color issue with yCbCr buffer
I'm making an app that reads a ProRes file, processes each frame through Metal to resize and scale it, then outputs a new ProRes file. In the future the app will support other codecs, but for now just ProRes. I'm reading the ProRes 422 buffers in the kCVPixelFormatType_422YpCbCr16 pixel format, as recommended by Apple in this video: https://developer.apple.com/wwdc20/10090?time=599

When the MTLTexture is run through a Metal Performance Shader, the colorspace seems to be forced to RGB, or yCbCr textures are simply not handled: the output is all green/purple. If you look at the render code, there's a commented-out block that just blit-copies the outputTexture; if you perform the copy instead of scaling through MPS, the output colorspace is fine. So the issue appears to come from Metal Performance Shaders.

Side note: I noticed that with this format the YpCbCr texture comes in as a single plane. I thought it was preferred to handle this as two separate planes? That said, two separate planes would make my app more complicated, since I would need to scale both planes or merge them to RGB, and I'm going for the most performance possible.

A sample project can be found here: https://www.dropbox.com/scl/fo/jsfwh9euc2ns2o3bbmyhn/AIomDYRhxCPVaWw9XH-qaN0?rlkey=sp8g0sb86af1u44p3xy9qa3b9&dl=0 Inside the supporting files there is a test movie; for convenience, move it somewhere easily accessible (e.g. the Desktop).

1. Load and run the example project.
2. Click 'Select Video'.
3. Select the video you placed on your desktop.
4. It will output a new video next to the selected one, named "Output.mov".

The new video should just be scaled to 50%, but the colorspace is all wrong. Below is a photo of before and after the Metal Performance Shader.
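For reference, the MPS scaling path the post describes typically looks something like the sketch below. It assumes the source and destination are ordinary RGB(A) textures, since MPS image kernels generally operate on raw channel data and don't interpret yCbCr semantics; the names and the 50% factor are illustrative, and this is not a fix for the color issue.

```swift
import Metal
import MetalPerformanceShaders

// Sketch: scale a texture to 50% with MPSImageBilinearScale.
// `device`, `commandBuffer`, `sourceTexture`, and `destinationTexture` are assumed to exist;
// the destination is expected to be allocated at half the source dimensions.
func encodeHalfScale(device: MTLDevice,
                     commandBuffer: MTLCommandBuffer,
                     sourceTexture: MTLTexture,
                     destinationTexture: MTLTexture) {
    let scaleKernel = MPSImageBilinearScale(device: device)
    var transform = MPSScaleTransform(scaleX: 0.5, scaleY: 0.5, translateX: 0, translateY: 0)
    withUnsafePointer(to: &transform) { transformPtr in
        scaleKernel.scaleTransform = transformPtr
        scaleKernel.encode(commandBuffer: commandBuffer,
                           sourceTexture: sourceTexture,
                           destinationTexture: destinationTexture)
    }
}
```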
Replies: 3 · Boosts: 0 · Views: 238 · Activity: 3w

PiP not launching from a sandboxed WKWebView app
Hi, I am developing an app that has a WKWebView that can open sites like YouTube. The app is sandboxed, as it is meant to be uploaded to the Mac App Store. It has a Picture in Picture (PiP) feature, where we start native PiP by calling browser JavaScript that tells the WKWebView to trigger PiP. It works well when we run the code from Xcode with the Debug scheme. When we run the code in release mode, either by archiving it or running it directly from the build folder, the WKWebView is not able to launch the PiP agent, so the native PiP window is not visible, even though the site shows that PiP is open and we can hear the sound being played. I cannot see PiPAgent in Activity Monitor. Why does this not work in the release build outside Xcode? Requesting technical help for this. Thanks!
Replies: 1 · Boosts: 1 · Views: 298 · Activity: Jul ’24

Bluetooth and microphone
Whenever I have any Bluetooth devices connected (radio, car, earphones) and want to record a voice message, the phone assumes I am recording from those devices, both in the Messages app and in any other app. Half of the devices I own don't even have a microphone, so no message gets recorded. Can you implement a choice of microphone to be used when recording something? Some apps don't even have the option to pick the audio output, which is annoying, but having to disable Bluetooth to record something is definitely worse.
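For app developers running into the same routing behavior, an app can at least offer an input choice through AVAudioSession. A minimal sketch, with the category and options being assumptions about a typical recording setup:

```swift
import AVFoundation

// Sketch: prefer the built-in microphone even while a Bluetooth route is active.
func preferBuiltInMicrophone() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [.allowBluetooth])
    try session.setActive(true)

    // availableInputs lists every input port (built-in mic, Bluetooth HFP, wired headset, ...).
    if let builtIn = session.availableInputs?.first(where: { $0.portType == .builtInMic }) {
        try session.setPreferredInput(builtIn)
    }
}
```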
Replies: 1 · Boosts: 0 · Views: 157 · Activity: 3w

How do we detect SCContentSharingPicker is cancelled?
Hello, I am trying to make use of SCContentSharingPicker in my app, and I wonder how I can detect when the picker is closed. I can open the picker screen with the following simple code:

```swift
SCContentSharingPicker.shared.isActive = true
SCContentSharingPicker.shared.add(self)
SCContentSharingPicker.shared.present()
```

I then close it with the "Cancel" button in the top right corner. Initially I expected to get an event through an observer like the one below, but realized it is only called when a stream is cancelled:

```swift
extension ContentPickerButton: SCContentSharingPickerObserver {
    func contentSharingPicker(_ picker: SCContentSharingPicker, didCancelFor stream: SCStream?) {
        logger.info("Picker canceled for stream \(stream)")
    }
}
```

I would like to get a picker-close event so that I can deactivate the picker. (Otherwise, the camera icon stays alive in the tray.) How do we get a close event?
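For context, the observer protocol has three requirements; a minimal conformance looks roughly like the sketch below. The identifiers are placeholders, and deactivating the picker inside didCancelFor is an assumption rather than a confirmed fix for the missing close event the post asks about.

```swift
import ScreenCaptureKit
import OSLog

final class PickerObserverSketch: NSObject, SCContentSharingPickerObserver {
    private let logger = Logger(subsystem: "com.example.picker", category: "picker")  // hypothetical identifiers

    func contentSharingPicker(_ picker: SCContentSharingPicker, didUpdateWith filter: SCContentFilter, for stream: SCStream?) {
        logger.info("Picker returned a content filter")
    }

    func contentSharingPicker(_ picker: SCContentSharingPicker, didCancelFor stream: SCStream?) {
        // Observed to fire when an existing stream is cancelled; deactivating here is an assumption.
        picker.isActive = false
    }

    func contentSharingPickerStartDidFailWithError(_ error: Error) {
        logger.error("Picker failed to start: \(error.localizedDescription)")
    }
}
```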
Replies: 1 · Boosts: 0 · Views: 235 · Activity: Aug ’24

In iOS 18, the seek bar operation in a music player app is not working
I have developed and operate a music player app. When I installed the iOS 18 public beta on my device and checked the app, I found that the seek bar stops immediately after playback starts, and I cannot change the playback position with the seek bar. Checking the logs, the following error is output when the seek bar stops: ERROR AudioQueueCreateTimeline status=1953330284. This is a value I have never seen before, and the issue did not occur on iOS 17 or earlier. I would like to know whether this issue can be resolved, and if not, how I should handle it.
Replies: 2 · Boosts: 1 · Views: 160 · Activity: 3w

Enabling EDR on Apple TV for custom HDR video playback with VideoToolbox
How is it possible to enable EDR on Apple TV, without AVFoundation, for custom HDR video playback? The use case is a custom video player for HDR playback via VideoToolbox and Metal, which seems to render colors correctly on iOS but not on tvOS. All related documentation and WWDC sessions describe APIs that are unavailable on tvOS:

```swift
let metalLayer = CAMetalLayer()
metalLayer.wantsExtendedDynamicRangeContent = true
metalLayer.edrMetadata = CAEDRMetadata.hdr10(minLuminance: 0.0, maxLuminance: 1000, opticalOutputScale: 100)
```

What's the alternative path on tvOS to get correct system tone mapping for a setup like:

```swift
metalLayer.pixelFormat = .rgba16Float // (or .bgr10_xr)
metalLayer.colorspace = CGColorSpace(name: CGColorSpace.itur_2100_PQ)
```

Video format: HEVC, YUV 4:2:0 10-bit, BT.2020 PQ. We do set preferredDisplayCriteria on AVDisplayManager, so video range matching is in place. WWDC reference: https://developer.apple.com/videos/play/wwdc2022/110565?time=557
Replies: 1 · Boosts: 1 · Views: 202 · Activity: 3w