Search results for “Popping Sound”
19,349 results found

Post · Replies · Boosts · Views · Activity

Errors reading not-yet-sync'd iCloud files get cached
I have an app that uses ubiquitous containers and the files in them to share data between devices. It's a bit unusual in that it indexes files in directories the user grants access to, which may or may not exist on a second device; those files are identified by SHA-1 hash. So a second device scanning before the iCloud data has fully sync'd can create duplicate references, which leads to an unpleasant user experience. To solve this, I store a small binary index containing all of the known hashes in the root of the ubiquitous container of the shared data, and as the user proceeds through the onboarding process, a background thread attempts to prime the ubiquitous container by calling FileManager.default.startDownloadingUbiquitousItem(at:) for each expected folder and file in a sane order. This likely creates a situation not anticipated by the iOS/iCloud integration's design, as it means my app has a sort of precognition of files it should not yet know about. In the common case it works, but there is a corner
1 reply · 0 boosts · 71 views · Aug ’25
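A minimal, pure-Swift sketch of the kind of hash index the poster describes: a set of SHA-1 hex digests serialized into a single blob stored in the container root. All names here (`HashIndex`, `isKnown`, the newline-joined encoding) are illustrative assumptions, not the poster's actual format.

```swift
import Foundation

// Hypothetical sketch of a small binary index of known SHA-1 digests.
// A second device consults it before creating a new file reference,
// avoiding duplicates for files that simply haven't sync'd yet.
struct HashIndex {
    private(set) var hashes: Set<String>

    init(hashes: Set<String> = []) { self.hashes = hashes }

    // Decode from the stored blob; here a simple newline-joined UTF-8 list.
    init(data: Data) {
        let text = String(decoding: data, as: UTF8.self)
        self.hashes = Set(text.split(separator: "\n").map(String.init))
    }

    // Encode for writing back to the container root.
    func encoded() -> Data {
        Data(hashes.sorted().joined(separator: "\n").utf8)
    }

    func isKnown(_ sha1Hex: String) -> Bool { hashes.contains(sha1Hex) }

    mutating func register(_ sha1Hex: String) { hashes.insert(sha1Hex) }
}

var index = HashIndex()
index.register("da39a3ee5e6b4b0d3255bfef95601890afd80709")
let roundTripped = HashIndex(data: index.encoded())
print(roundTripped.isKnown("da39a3ee5e6b4b0d3255bfef95601890afd80709")) // true
```

The round trip through `Data` stands in for writing and re-reading the index file; the actual container I/O (and coordination via `NSFileCoordinator`) is elided.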
Reply to Delay in Microphone Input When Talking While Receiving Audio in PTT Framework (Full Duplex Mode)
[quote]When I talk from a neutral state (no one is speaking), the system plays the standard microphone activation tone, which covers this initial delay. However, this does not happen when I am already receiving audio.[/quote] Can you file a bug about the second (no tone) case and post the bug number back here? That's not what I expected and may be a bug. [quote]Because the audio session is active in play and record, I assumed that microphone input would be available immediately, even while receiving audio.[/quote] That assumption is incorrect. It shouldn't be long, but there will be a delay. What's actually going on here is callservicesd releasing audio input to your app, which does cause a short delay. I believe the delay is roughly the same as unmuting a CallKit call. One thing to understand here is that, just like CallKit*, the PTT audio session is NOT actually a standard PlayAndRecord session. It can do things that the standard PlayAndRecord cannot (for example, it CANNOT be interrupted by
Topic: Media Technologies · SubTopic: Audio
Aug ’25
Audio driver based on AudioDriverKit sometimes hangs after sleep
Dear Sirs, I’ve written a virtual audio driver based on AudioDriverKit, running as a dext in my macOS app. Sometimes when waking from sleep, the recording side of my driver extension seems to hang and I don’t see any calls to my io_operation callback. A recording app such as a DAW then appears to hang when trying to start a recording. This doesn’t happen after short sleep states or after a complete restart of my MacBook. I opened a case in Feedback Assistant on the 5th of May (FB17503622), which includes a sysdiagnose and a ktrace, but I haven't received any feedback so far. Meanwhile some of our customers are getting angry, and I'd like to know if there's anything I can do to fix this problem on my side. We’re not sure whether this worked in previous macOS versions; we think we didn’t observe it before 15.3.1, but at least since 15.3.1 we’ve seen this problem. Best regards, Johannes
1 reply · 0 boosts · 89 views · Aug ’25
Reply to Prevent SSL Handshake with User Installed Certificates
[quote='851708022, ka_taa_na, /thread/795245?answerId=851708022#851708022, /profile/ka_taa_na'] so how we can do the pinning? [/quote] I can’t offer advice specific to your third-party hosting service. Given that you have documentation about the features offered by NSPinnedDomains, I recommend that you discuss this with your hosting provider. [quote='851708022, ka_taa_na, /thread/795245?answerId=851708022#851708022, /profile/ka_taa_na'] can't I exclude user installed certificates? [/quote] Not with the current NSPinnedDomains feature set. But, hey, that sounds like a perfectly reasonable enhancement request for that API. If you file such an ER, please post your bug number, just for the record. You could add your own additional checks by handling the NSURLAuthenticationMethodServerTrust authentication challenge, but that has some caveats: There’s no good way to determine whether the chain of trust leads to a built-in CA [1]. This technique only works for APIs, like URLSession, that let you customise HTTPS serve
Aug ’25
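If you do handle the NSURLAuthenticationMethodServerTrust challenge yourself, the core of a manual pinning check is comparing a digest of the server's public key against an allow-list baked into the app. A minimal sketch of just that comparison; the SecTrust evaluation and key-digest extraction are platform-specific and elided, and all names here are hypothetical:

```swift
import Foundation

// Hypothetical core of a manual pinning check. The digest of the server's
// public key is assumed to be obtained elsewhere (via SecTrust APIs, not
// shown) and is represented as a hex string purely for illustration.
struct PinnedKeySet {
    // Pinned digests, stored lowercase.
    let allowedDigests: Set<String>

    // Accept the connection only if the presented key digest is pinned.
    func permits(serverKeyDigest: String) -> Bool {
        allowedDigests.contains(serverKeyDigest.lowercased())
    }
}

let pins = PinnedKeySet(allowedDigests: ["aabbccdd"])
print(pins.permits(serverKeyDigest: "AABBCCDD")) // true
print(pins.permits(serverKeyDigest: "12345678")) // false
```

In a URLSession delegate you would typically call the challenge's completion handler with `.useCredential` and a `URLCredential(trust:)` on success, and `.cancelAuthenticationChallenge` on failure, after also letting the system evaluate the trust normally.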
Reply to Signing Certificate for AU Plugin
There are many different types of audio unit plug-ins. Most folks who ask questions like this are working with old school plug-ins (for example, .component and .vst3) rather than the new app extension (.appex) stuff. Old school plug-ins are only supported on the Mac and are directly distributed, rather than going through the App Store. That means they use Developer ID signing. IMPORTANT Developer ID signing identities are precious. See The Care and Feeding of Developer ID. As to how you sign them, that depends on what tools you’re using to build them. If you’re using Xcode then it will happily sign them for day-to-day development, but the standard Xcode workflows may not work for distributing audio unit plug-ins. In most cases you’ll need to sign them manually. The exact process for that depends on how your plug-in is laid out, but you’ll find general advice in: Creating distribution-signed code for macOS Packaging Mac software for distribution If you have specific follow-up questions, feel
Aug ’25
Reply to macOS26: MenuBarExtra item not showing
[quote]The helper has no UI because it’s not an app, it’s just a standalone executable.[/quote] The helper does have a UI, allowing the user to set the app parameters and view its activity, but the only way for the user to access it is through the menu bar icon. We can pop up the main window through a terminal command, and the UI is fine as well on macOS 26. As for the other statements, yes, they are correct.
Aug ’25
C++ and Swift in Xcode 16 broke my audio unit
I'm developing an audio unit for use on iOS. The AUv3 worked fine with Xcode 15.x and Swift 5.x. I recently tried to submit an update to my plug-in, but Apple refused the submission because my Xcode was not the latest. Now that I'm on Xcode 16.4 I can't get my project to compile, even when following all of the same previous steps. As one example of a change, Xcode no longer appears to include the “C++ and Objective-C interoperability” build setting that it used to. This setting is noted in the Swift documentation and I used to need it: https://www.swift.org/documentation/cxx-interop/project-build-setup/#mixing-swift-and-c-using-xcode Currently my C++ code can't see anything from Swift, and I get a Use of undeclared identifier 'project_name'. I've selected Swift support for version 5.0 in an attempt to minimize changes from Apple. My process is: I generate an Xcode project file from my audio plug-in framework, JUCE. Then I add in the Swift files and click yes to create bridging headers, but C++ doesn't see
3 replies · 0 boosts · 294 views · Aug ’25
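For what it's worth, even when the checkbox is hard to find in the Xcode 16 Build Settings UI, the underlying build setting can usually be set directly. A sketch of what to try in an .xcconfig, assuming the setting name hasn't changed since it was introduced alongside Swift/C++ interop in Xcode 15:

```
// Hypothetical .xcconfig fragment: enables "C++ and Objective-C
// Interoperability" (value objcxx = C++/Objective-C++ mode).
SWIFT_OBJC_INTEROP_MODE = objcxx
```

The swift.org page linked above also describes the command-line equivalent, passing -cxx-interoperability-mode=default to the Swift compiler, which can go in Other Swift Flags if the dedicated setting is unavailable.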
Problem receiving Remote Notification in the background after Review Rejected
I created an app. One of its functionalities is to receive Remote Notifications in the background while the app is monitoring Significant Location Changes (SLC). This functionality worked fine. I was receiving these notifications correctly, sometimes instantly, sometimes with a small or large delay. Then I sent the app for review. It was rejected with 3 remarks:
1. The app or metadata includes information about third-party platforms that may not be relevant for App Store users, who are focused on experiences offered by the app itself. (I wrote that app communication works for both iOS and Android.)
2. The app declares support for audio in the UIBackgroundModes key in your Info.plist but we are unable to locate any features that require persistent audio.
3. An EULA (End User License Agreement) is missing for the in-app purchases.
After the rejection the app is no longer receiving these notifications. They are there, since the app receives them when I open the app or a significant location change is detected. It also wo
2 replies · 0 boosts · 134 views · Aug ’25
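Regarding the audio remark above: one common resolution is trimming UIBackgroundModes down to only the modes the app actually uses. A hypothetical Info.plist fragment for background remote notifications plus location, with the unused audio mode removed (adjust to your actual feature set):

```xml
<key>UIBackgroundModes</key>
<array>
	<string>remote-notification</string>
	<string>location</string>
</array>
```

Note that silent remote notifications also require the content-available flag in the push payload's aps dictionary, and the system delivers them at its own discretion, which can account for the variable delays described.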
Delay in Microphone Input When Talking While Receiving Audio in PTT Framework (Full Duplex Mode)
Context: I am currently developing an app using the Push-to-Talk (PTT) framework. I have reviewed both the PTT framework documentation and the CallKit demo project to better understand how to properly manage audio session activation and AVAudioEngine setup. I am not activating the audio session manually; the audio session configuration is handled in the incomingPushResult or didBeginTransmitting callbacks from the PTChannelManagerDelegate. I am using a single AVAudioEngine instance for both input and playback. The engine is started in the didActivate callback from the PTChannelManagerDelegate. When I receive a push in full duplex mode, I set the active participant to the user who is speaking.
Issue: When I attempt to talk while the other participant is already speaking, my input tap on the input node takes a few seconds to return valid PCM audio data. Initially, it returns an empty PCM audio block.
Details: The audio session is already active and configured
4 replies · 0 boosts · 297 views · Aug ’25
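Given the reply above (the delay while callservicesd hands audio input to the app is expected), one workaround is to gate outgoing audio until the first non-silent buffer arrives from the input tap. A pure-Swift sketch of that gating logic; the type names are hypothetical and the buffer is simplified to a [Float] rather than an AVAudioPCMBuffer:

```swift
import Foundation

// Hypothetical gate that drops the leading empty or near-silent buffers an
// input tap can deliver while the system is still handing the microphone
// over to the app.
final class InputWarmupGate {
    private var isLive = false
    private let threshold: Float

    init(threshold: Float = 1e-4) { self.threshold = threshold }

    // Returns the buffer once real audio starts flowing, nil before that.
    func filter(_ samples: [Float]) -> [Float]? {
        if !isLive {
            let peak = samples.map { abs($0) }.max() ?? 0
            guard peak > threshold else { return nil } // still warming up
            isLive = true
        }
        return samples
    }
}

let gate = InputWarmupGate()
print(gate.filter([0, 0, 0]) == nil)   // true: empty warm-up buffer dropped
print(gate.filter([0.2, -0.1]) != nil) // true: live audio begins
print(gate.filter([0, 0]) != nil)      // true: stays live afterwards
```

This only hides the warm-up from downstream consumers; it does not shorten the underlying handover delay, which per the reply above is system behavior.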
Reply to Disabling font Anti-Aliasing in a Text in SwiftUI
I really don’t appreciate answers along the lines of “Why would you want to do this? You don't need it.” That kind of response is exactly why people are increasingly turning to AI tools for help — because they aim to assist without being dismissive or patronizing. I asked the question because I do notice a difference. I’m currently using a bitmap-style font from this site: https://int10h.org/oldschool-pc-fonts/fontlist. They provide real bitmap fonts, but macOS doesn’t support them natively, so I'm forced to use outline versions instead. Creating a custom font won't help in this case — it won't behave differently regarding anti-aliasing. Here’s the context: I’m developing a music app for macOS that simulates the look of old audio equipment — specifically the monochrome dot matrix LCDs and VFDs. The font is rendered at a specific size such that each ‘fake bitmap pixel’ maps to a 3×3 block of screen pixels. I then apply a mask so that only the 4 top-left pixels of each block are visible, mimicking the
Topic: UI Frameworks · SubTopic: SwiftUI
Aug ’25
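The 3×3-block masking rule described above (each fake bitmap pixel maps to a 3×3 block of screen pixels, with only the 2×2 top-left quarter kept visible) can be sketched in pure Swift. The function name and the 6×6 demo grid are illustrative only:

```swift
import Foundation

// Each 'fake bitmap pixel' occupies a 3x3 block of screen pixels; only the
// 4 top-left pixels (the 2x2 quarter) of each block stay visible.
func isVisible(x: Int, y: Int) -> Bool {
    (x % 3) < 2 && (y % 3) < 2
}

// Render a 6x6 region: '#' = visible, '.' = masked.
for y in 0..<6 {
    var row = ""
    for x in 0..<6 {
        row += isVisible(x: x, y: y) ? "#" : "."
    }
    print(row)
}
// Each 3x3 block shows a 2x2 dot with a one-pixel gap on the right and
// bottom, mimicking the gaps between dots in an LCD/VFD matrix.
```

In the app itself this predicate would drive the mask layer rather than a text grid, but the arithmetic is the same.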
AlarmKit Not Playing Custom Sound or Named System Sounds - iOS 26 beta 4 (23A5297i)
I'm using the new AlarmKit framework to schedule and trigger alarms in my Swift app on iOS 26 beta 4 (23A5297i). I'm trying to customize the alarm sound using a sound file embedded in the app bundle or by referencing known system tones. Problem: No matter what I pass to .named(...), whether a bundle file URL, .named("sound-2.caf") (I tried .mp3, .caf, and .aiff), or a known iOS system sound like .named("Radar") (Chimes, etc.), the alarm always plays the default system alert tone. There's no error or warning; the custom or specified sound is silently ignored. sound: .named("sound-2") Questions: What is the correct approach to play a custom sound when the alarm triggers? Does .named(...) expect a file name, a file path URL, or a system sound name? Is there a specific accepted audio file length or format? Challenge: The alarm functionality feels incomplete without support for custom sounds.
4 replies · 0 boosts · 138 views · Aug ’25