Hello, I've discovered a buffer initialization bug in AVAudioUnitSampler that happens when loading presets with multiple zones referencing different regions in the same audio file (a monolith/concatenated-samples approach). Almost all zones output silence (i.e. zeros) at the beginning of playback instead of starting with actual audio data.

The Problem Setup:
- A single audio file (monolith) containing multiple concatenated samples
- Multiple zones in an .aupreset, each with different sample start and sample end values pointing to different regions of the same file
- All zones load successfully without errors

Expected Behavior: All zones should play their respective audio regions immediately, from the first sample.

Actual Behavior:
- Last zone in the zone list: works perfectly, plays audio immediately
- All other zones: output [0, 0, 0, 0, ..., real_audio_data] instead of [real_audio_data]

The number of zeros varies from event to event for each zone. It can be a couple of samples (<…
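A minimal sketch of the playback path described above, assuming a bundled preset named MonolithPreset.aupreset (a hypothetical name standing in for the poster's actual preset):

import AVFoundation

let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()
engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)

do {
    // "MonolithPreset" is a placeholder; substitute your own .aupreset.
    guard let presetURL = Bundle.main.url(forResource: "MonolithPreset",
                                          withExtension: "aupreset") else {
        fatalError("preset missing from bundle")
    }
    try sampler.loadInstrument(at: presetURL)
    try engine.start()
    // Per the report above, every zone except the last one begins with
    // a variable-length run of zero samples when its note is triggered.
    sampler.startNote(60, withVelocity: 100, onChannel: 0)
} catch {
    print("Failed to load preset or start engine: \(error)")
}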
Search results for "Popping Sound" (19,739 results found)
You said you selected it as a preview device; what do you mean here? It sounds like you selected your model of phone for the simulator. In the middle top of the Xcode window, it shows your target name and a run destination. By default, that run destination for an iOS app is a simulator. Plug in your phone. If it doesn't appear in the popup menu as a run destination, choose Manage Run Destinations… from that menu. It should show up as discovered in the list on the left of the Run Destinations window. The first time you pair the phone with Xcode takes quite a while (several minutes for me).
Topic:
Code Signing
SubTopic:
Certificates, Identifiers & Profiles
So, the first thing to understand is that what you're describing here: "Our app receives a CallKit VoIP call. When the user taps “Answer”, the app launches and automatically connects to a real-time audio session using WebRTC or MobileRTC." ...is NOT what actually happens on iOS. Your app doesn't receive a CallKit call, nor is CallKit something that really controls how your app works. This is how incoming VoIP pushes actually work:

1. The device receives a VoIP push for your app.
2. The system either launches or wakes your app (depending on whether or not your app is running).
3. Your app receives the VoIP push.
4. Your app reports a new call into CallKit.
5. The system presents the incoming call UI, sending updates back to your app about the actions the user takes in that UI.
6. If the user answers the call, the system activates the audio session you previously configured.

The critical thing to understand here is that CallKit is best understood as an interface framework (albeit a very narrowly focused one), N…
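A minimal sketch of steps 3 and 4 on the app side, assuming PushKit is already registered for the .voIP push type (the payload key "caller" is hypothetical; your server defines the actual payload shape):

import PushKit
import CallKit

final class VoIPPushHandler: NSObject, PKPushRegistryDelegate {
    let provider = CXProvider(configuration: CXProviderConfiguration())

    func pushRegistry(_ registry: PKPushRegistry,
                      didUpdate pushCredentials: PKPushCredentials,
                      for type: PKPushType) {
        // Send pushCredentials.token to your push server.
    }

    func pushRegistry(_ registry: PKPushRegistry,
                      didReceiveIncomingPushWith payload: PKPushPayload,
                      for type: PKPushType,
                      completion: @escaping () -> Void) {
        // The push arrives, and the app MUST report a new call into
        // CallKit before returning, or the system will stop delivering
        // VoIP pushes to the app.
        let caller = payload.dictionaryPayload["caller"] as? String ?? "Unknown"
        let update = CXCallUpdate()
        update.remoteHandle = CXHandle(type: .generic, value: caller)
        provider.reportNewIncomingCall(with: UUID(), update: update) { _ in
            completion()
        }
    }
}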
Topic:
App & System Services
SubTopic:
General
Tags:
I wrote a Swift macOS app to control a PCI audio device. The code switches between the default output and input channels. As soon as I launch the Audio MIDI Setup utility, channel switching stops working. The driver properties allow switching, but the system doesn't respond. I have to delete the contents of /Library/Preferences/Audio and reset Core Audio. What am I missing?

func setDefaultChannelsOutput() {
    guard let deviceID = getDeviceIDByName(deviceName: "PCI-424") else { return }
    let selectedIndex = DefaultChannelsOutput.indexOfSelectedItem
    if selectedIndex < 0 || selectedIndex >= 24 { return }
    let channel1 = UInt32(selectedIndex * 2 + 1)
    let channel2 = UInt32(selectedIndex * 2 + 2)
    var channels: [UInt32] = [channel1, channel2]
    var propertyAddress = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyPreferredChannelsForStereo,
        mScope: kAudioDevicePropertyScopeOutput,
        mElement: kAudioObjectPropertyElementWildcard
    )
    // Two UInt32 channel numbers for the preferred stereo pair.
    let dataSize = UInt32(MemoryLayout<UInt32>.size * channels.count)
    let status = AudioObjectSetPropertyData(deviceID, &propertyAddress,
                                            0, nil, dataSize, &channels)
    if status != noErr {
        print("AudioObjectSetPropertyData failed: \(status)")
    }
}
Topic:
Media Technologies
SubTopic:
Audio
[quote='868001022, Hylus-Arbor, /thread/808897?answerId=868001022#868001022, /profile/Hylus-Arbor'] maybe the Accessibility is compromised in the same way with 26.1 [/quote] Yes. This bug seems to affect a wide range of privileges controlled by System Settings > Privacy & Security. I haven’t explored the full list, but others have noticed that it also affects Screen & System Audio Recording, so it wouldn’t surprise me if it affected Accessibility as well. [quote='868001022, Hylus-Arbor, /thread/808897?answerId=868001022#868001022, /profile/Hylus-Arbor'] I updated my own Mac from 15.7 to 26.1. [/quote] My recommendation is that you test this stuff in one or more VMs. That way you have direct control over what system software is installed, and you can revert to a known ‘clean’ snapshot between each test. Share and Enjoy — Quinn “The Eskimo!” @ Developer Technical Support @ Apple let myEmail = eskimo + 1 + @ + apple.com
Topic:
Privacy & Security
SubTopic:
General
Tags:
Hi team, I’m having an issue with my iOS app related to local network communication and connecting to a Wi-Fi speaker. My app works similarly to the “4Stream” application. The speaker and the mobile device must be on the same Wi-Fi network so the app can discover and connect to the speaker.

What’s happening: When I run the app directly from Xcode in debug mode, everything works perfectly. The speaker gets discovered, the speaker gets connected successfully, and the connection flow completes without any problem. But when I upload the same build to TestFlight, the behaviour changes completely: the app gets stuck on the “Connecting…” screen, the speaker is not discovered, and it never moves forward from that state. The same code works fine on Android.

So basically:
Debug mode: speaker is detected and connected properly
TestFlight: stuck at “Connecting…”, speaker does NOT get connected

This makes me believe something related to local network access, multicast, Wi-Fi info permissions, or Bonjour discovery is not bei…
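If local network permission is the culprit, the usual checklist is an NSLocalNetworkUsageDescription string plus an NSBonjourServices entry for every service type the app browses; raw multicast/broadcast (e.g. SSDP) additionally needs the com.apple.developer.networking.multicast entitlement, which must be requested from Apple. A sketch of the Info.plist entries, with a hypothetical service type:

<key>NSLocalNetworkUsageDescription</key>
<string>Finds and connects to speakers on your Wi-Fi network.</string>
<key>NSBonjourServices</key>
<array>
    <!-- Replace with the service type your speaker actually advertises. -->
    <string>_example-speaker._tcp</string>
</array>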
Hello! Thanks for taking the time to write about this. This doesn't sound right, and is probably a bug. But to be sure, we need some more information to investigate. Can you please file a bug report, including a code sample, sysdiagnose logs, and a video recording, at the following link? https://developer.apple.com/bug-reporting/ The site will walk you through these steps. You can reply with the Feedback ID for this issue and I'll make sure it gets to the right team to investigate.
Topic:
Accessibility & Inclusion
SubTopic:
General
Tags:
Huh. Out of curiosity, did APFS always do that to prevent that issue? No, at least not entirely. I don't remember if it was an issue on macOS (where booting support wasn't added until later), but there was a short interval in iOS 8 where it was possible to completely fill an iOS device. However, the end result of that wasn't quite as bad as I made it sound (or as it theoretically could have been). The nature of COW also means that in a normal I/O flow, the I/O request that uses the last available space is effectively guaranteed to be the writes for the next/new data that's going to be written. So what actually ended up happening was:

1. The kernel panicked, since APFS couldn't really do anything else.
2. The file system was left in a dirty state (since it didn't unmount).
3. The fsck prior to remount found that pending data and cleared it (since that's all it could really do).

...which then freed up enough space for the file system to be functional again, at least enough that you could delete files and clear the issue.
Topic:
App & System Services
SubTopic:
Core OS
Tags:
The apps mentioned aren't CarPlay apps. For audio from non-CarPlay apps, it is generally best to use Bluetooth. Hopefully this helps.

Rico
Car Experience - Software Engineer
Topic:
Community
SubTopic:
Apple Developers
Tags:
For Notification Tones, the audio session category is configured to AVAudioSessionCategoryPlayAndRecord with VoiceChat mode and the AllowBluetooth option. Tones play using system audio services without an active audio session. No background audio session activation occurs for notification tones.

Are you talking about using AudioServicesPlayAlertSound()/AudioServicesPlaySystemSound()? If so, then the answer here is basically no: you can't change any of the details of how those APIs behave. Both of those APIs work by shifting the actual playback out of process, instead of having your app directly play the audio. This allows them to be used in apps that don't have ANY audio configuration at all, as well as in contexts that would otherwise be complex/problematic. For example, it lets apps with arbitrarily complex and configurable audio configurations play alert sounds without having to figure out how to mix and route that alert sound with the…
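A minimal sketch of that out-of-process path, assuming a bundled "ding.caf" (a hypothetical file name):

import Foundation
import AudioToolbox

if let url = Bundle.main.url(forResource: "ding", withExtension: "caf") {
    var soundID: SystemSoundID = 0
    AudioServicesCreateSystemSoundID(url as CFURL, &soundID)
    // Playback happens out of process; no AVAudioSession setup,
    // activation, or routing decisions are needed (or possible).
    AudioServicesPlayAlertSound(soundID)
}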
Topic:
App & System Services
SubTopic:
Core OS
Tags:
Sounds like you are running up against the Anthropic usage limitation. There is no published data on what the trigger limit actually is, but I find myself bumping up against it about once a month or so (depending on how heavily I've been relying on the code assistant). Anthropic has an unlimited tier, but it's something like $200 per month, so...
Topic:
Developer Tools & Services
SubTopic:
Xcode
How are you doing this? The audio system should not be allowing PlayAndRecord to directly activate in a background app.

For Notification Tones:
- The audio session category is configured to AVAudioSessionCategoryPlayAndRecord with VoiceChat mode and the AllowBluetooth option
- Tones play using system audio services without an active audio session
- No background audio session activation occurs for notification tones
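For reference, a sketch of the session configuration being described (configuration only; the open question above is how the session is being activated in the background):

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.allowBluetooth])
    // Activation is the part the system should refuse in the background;
    // setCategory alone does not activate the session.
    // try session.setActive(true)
} catch {
    print("Audio session configuration failed: \(error)")
}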
Topic:
App & System Services
SubTopic:
Core OS
Tags:
Is there any way to use the hardware RF reading capabilities of an iPhone to read ISO15693 RF tags silently, and without a UI pop-up? Perhaps using other native iOS libraries than the NFC library? If not, is there a way for a business to request this feature be allowed in internally used apps only?
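For context, a minimal sketch of the documented Core NFC path for ISO 15693, which does present the system scanning sheet; I'm not aware of any public API that reads tags silently:

import CoreNFC

final class ISO15693Reader: NSObject, NFCTagReaderSessionDelegate {
    private var session: NFCTagReaderSession?

    func begin() {
        // Polls for ISO 15693 tags; the system scanning sheet appears.
        session = NFCTagReaderSession(pollingOption: .iso15693, delegate: self)
        session?.begin()
    }

    func tagReaderSessionDidBecomeActive(_ session: NFCTagReaderSession) {}

    func tagReaderSession(_ session: NFCTagReaderSession,
                          didInvalidateWithError error: Error) {}

    func tagReaderSession(_ session: NFCTagReaderSession, didDetect tags: [NFCTag]) {
        guard let first = tags.first, case let .iso15693(tag) = first else {
            session.invalidate(errorMessage: "Not an ISO 15693 tag.")
            return
        }
        session.connect(to: first) { error in
            guard error == nil else { return }
            // tag.identifier is the tag's UID.
            let uid = tag.identifier.map { String(format: "%02X", $0) }.joined()
            print("Read tag UID: \(uid)")
            session.invalidate()
        }
    }
}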
Topic:
App & System Services
SubTopic:
Drivers
Tags:
Our app receives a CallKit VoIP call. When the user taps “Answer”, the app launches and automatically connects to a real-time audio session using WebRTC or MobileRTC. We would like to confirm whether the following flow (“CallKit Answer → app opens → automatic WebRTC or MobileRTC audio session connection”) complies with Apple’s VoIP Push / CallKit policy.

In addition, our service also provides real-time video-lesson functionality using the Zoom Meeting SDK (MobileRTC). When an incoming CallKit VoIP call is answered, the app launches and the user is automatically taken to the Zoom-based video lesson flow: the app opens → the user lands on the Zoom Meeting pre-meeting room → MobileRTC initializes immediately. In the pre-meeting room, audio and video streams can already be active and MobileRTC establishes a connection, but the actual meeting screen is not joined until the user explicitly taps “Join”. We would like to confirm whether this flow for video lessons (“CallKit Answer → app…
Application is using AVAudioSessionCategoryPlayAndRecord category and...
How are you doing this? The audio system should not be allowing PlayAndRecord to directly activate in a background app.

But AVAudioSessionCategoryOptionAllowBluetoothHFP Option is not defined in AVAudioSessionCategoryOptions
What do you mean? It's right here.

Is there any option to use as an alternative to the AVAudioSessionCategoryOptionAllowBluetooth option?
No, but that's because AVAudioSessionCategoryOptionAllowBluetooth and AVAudioSessionCategoryOptionAllowBluetoothHFP are exactly the same thing. From AVAudioSessionTypes.h, after you remove the availability markings:

/// Deprecated - please see ``AVAudioSessionCategoryOptionAllowBluetoothHFP``
AVAudioSessionCategoryOptionAllowBluetooth = 0x4,
...
/// - Other categories:
///   AllowBluetoothHFP defaults to false and cannot be changed. Enabling Bluetooth for input in
///   these categories is not allowed.
AVAudioSessionCategoryOptionAllowBluetoothHFP = 0x4,

In other words, AVAudioSessionCa…
Topic:
App & System Services
SubTopic:
Core OS
Tags: