Search results for “Popping Sound”

19,349 results found


Reply to macOS VPN apps outside of the App Store
[quote='854282022, juozas, /thread/797007?answerId=854282022#854282022, /profile/juozas'] Or should I file a separate one? [/quote]

It’s probably better to do this. Assuming that I’ve understood you properly (-:

It sounds like you want installer packages to be able to install and activate system extensions. If so, that’s a question for the installer team, as opposed to the NE team, and so a separate bug report makes sense.

Share and Enjoy — Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = eskimo + 1 + @ + apple.com
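For readers following along: today the supported path is for the app itself to submit an activation request via OSSystemExtensionManager. A rough sketch of that flow, with a hypothetical extension bundle identifier:

```swift
import SystemExtensions

// Sketch of the current in-app activation flow. The bundle identifier
// below is hypothetical.
final class ExtensionActivator: NSObject, OSSystemExtensionRequestDelegate {
    func activate() {
        let request = OSSystemExtensionRequest.activationRequest(
            forExtensionWithIdentifier: "com.example.app.networkextension",
            queue: .main
        )
        request.delegate = self
        OSSystemExtensionManager.shared.submitRequest(request)
    }

    func request(_ request: OSSystemExtensionRequest,
                 actionForReplacingExtension existing: OSSystemExtensionProperties,
                 withExtension ext: OSSystemExtensionProperties) -> OSSystemExtensionRequest.ReplacementAction {
        .replace
    }

    func requestNeedsUserApproval(_ request: OSSystemExtensionRequest) {
        // The user must approve the extension in System Settings.
    }

    func request(_ request: OSSystemExtensionRequest,
                 didFinishWithResult result: OSSystemExtensionRequest.Result) {
        print("Activation finished: \(result)")
    }

    func request(_ request: OSSystemExtensionRequest,
                 didFailWithError error: Error) {
        print("Activation failed: \(error)")
    }
}
```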
Aug ’25
Reply to Using CBPeripheralManager while using AccessorySetupKit framework
I've been pulling my hair out, even after pulling everything out into a simple multi-platform project with two demos: ASK and not-ASK. If ASK isn't completely branched off, it burns the BT radios. In the flagship sample project:

```swift
private static let pinkDice: ASPickerDisplayItem = {
    let descriptor = ASDiscoveryDescriptor()
    descriptor.bluetoothServiceUUID = DiceColor.pink.serviceUUID
    return ASPickerDisplayItem(
        name: DiceColor.pink.displayName,
        productImage: UIImage(named: DiceColor.pink.diceName)!,
        descriptor: descriptor
    )
}()
```

I only see bluetoothServiceUUID provided. In the docs, however:

Each display item’s descriptor, a property of type ASDiscoveryDescriptor, needs to have a bluetoothCompanyIdentifier or bluetoothServiceUUID, and at least one of the following accessory identifiers:

- bluetoothNameSubstring
- A bluetoothManufacturerDataBlob and bluetoothManufacturerDataMask set to the same length.
- A bluetoothServiceDataBlob and bluetoothServiceDataMask set to the same length.

It wasn't until I removed bluetoothN
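For anyone hitting the same wall, here is a minimal sketch of a descriptor that carries both a service UUID and one of the required accessory identifiers from the quoted docs; the UUID, names, and image are placeholders, not values from the sample:

```swift
import AccessorySetupKit
import CoreBluetooth
import UIKit

// A descriptor with a service UUID plus a name substring, matching the
// documented requirements. The UUID, name, and image are placeholders.
let descriptor = ASDiscoveryDescriptor()
descriptor.bluetoothServiceUUID = CBUUID(string: "FFF0")
descriptor.bluetoothNameSubstring = "PinkDice"

let item = ASPickerDisplayItem(
    name: "Pink Dice",
    productImage: UIImage(systemName: "die.face.5")!,
    descriptor: descriptor
)
```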
Topic: App & System Services · SubTopic: Hardware
Aug ’25
Delay in Microphone Input When Talking While Receiving Audio in PTT Framework (Full Duplex Mode)
Context: I am currently developing an app using the Push-to-Talk (PTT) framework. I have reviewed both the PTT framework documentation and the CallKit demo project to better understand how to properly manage audio session activation and AVAudioEngine setup.

- I am not activating the audio session manually. The audio session configuration is handled in the incomingPushResult or didBeginTransmitting callbacks from the PTChannelManagerDelegate.
- I am using a single AVAudioEngine instance for both input and playback. The engine is started in the didActivate callback from the PTChannelManagerDelegate.
- When I receive a push in full duplex mode, I set the active participant to the user who is speaking.

Issue: When I attempt to talk while the other participant is already speaking, my input tap on the input node takes a few seconds to return valid PCM audio data. Initially, it returns an empty PCM audio block.

Details: The audio session is already active and configured
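For reference, a stripped-down sketch of the tap setup being described, assuming it is invoked from the didActivate callback once the system has activated the session; the names here are illustrative, not from the actual project:

```swift
import AVFAudio

let audioEngine = AVAudioEngine()

// Intended to run inside PTChannelManagerDelegate's didActivate callback,
// after the system has activated the audio session.
func startCapture() {
    let input = audioEngine.inputNode
    let format = input.outputFormat(forBus: 0)
    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // Forward PCM buffers to the uplink; an initially empty buffer
        // here matches the delay described above.
    }
    do {
        try audioEngine.start()
    } catch {
        print("Engine start failed: \(error)")
    }
}
```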
Replies: 4 · Boosts: 0 · Views: 297
Aug ’25
Reply to Crash when testing Speech sample app with FoundationModels on macOS 26.0 beta and iOS 26.0 beta
I’ve now upgraded Xcode to the latest available build: Xcode 26.0 beta 5 (17A5295f). This should, in theory, resolve the version mismatch issue you highlighted. Please let me know if I should expect to wait for beta 6 specifically, but since this is the latest release, I assume it covers the ABI/runtime symbol alignment.

Current behavior:

macOS (Intel MacBook Pro): The app launches successfully and the window is displayed. Pressing the record button (⭘) produces these console logs:

localeNotSupported
could not record: invalidAudioDataType

Playback works (the play button plays back my speech and I can hear the recorded sound), but no transcription is displayed and there is no highlighting.

iOS (iPhone 11, iOS 26.0 beta): Same as macOS above, except playback volume is extremely faint, even at maximum system volume.

Question regarding Dictation: I also tried enabling Edit → Start Dictation from the menu, and accepted the prompt. After that, I don’t see any option to disable it within the app. Turning off Dictation i
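Given the localeNotSupported log, a quick sanity check is whether the current locale is supported for speech recognition at all. Here is a sketch using the long-standing SFSpeechRecognizer API; the beta sample may gate locales through the newer analyzer API instead, so treat this only as an analogy:

```swift
import Speech

// Enumerate the locales the classic Speech framework supports and test
// the device's current locale against them. The new analyzer-based
// sample may gate locales differently.
let supported = SFSpeechRecognizer.supportedLocales()
print("Supported locales: \(supported.map(\.identifier).sorted())")

if let recognizer = SFSpeechRecognizer(locale: Locale.current) {
    print("Recognizer available: \(recognizer.isAvailable)")
} else {
    print("Locale \(Locale.current.identifier) is not supported")
}
```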
Aug ’25
Reply to OpenIntent not executed with Visual Intelligence
I understand how it works now, thank you for your explanation! It sounds like the use of EntityQuery is required by the App Intents framework, and an Entity used with Visual Intelligence is not exempt from it. I understand this makes AppEntity more versatile, although personally I think the Entity → ID → Entity retrieval process could be optimized in the case of Visual Intelligence, assuming there's an underlying collection view that maps each cell to the Entity instance. Thanks again for your help!
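For anyone else following the thread, a minimal sketch of the Entity → ID → Entity round trip that EntityQuery enforces; every type and name here is hypothetical:

```swift
import AppIntents

// Hypothetical entity: the framework hands back IDs and asks the query
// to resolve them into full entities (the round trip discussed above).
struct BookEntity: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Book"
    static var defaultQuery = BookQuery()

    var id: UUID
    var title: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(title)")
    }
}

struct BookQuery: EntityQuery {
    func entities(for identifiers: [BookEntity.ID]) async throws -> [BookEntity] {
        // Look the IDs up in your store; stubbed here for illustration.
        identifiers.map { BookEntity(id: $0, title: "Untitled") }
    }
}
```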
Aug ’25
Reply to SensorKit Speech data question
Thank you for your answers, Argun! While most of the answers are very helpful, I am puzzled by the one regarding duplicated sessions. As you can see in the screenshot, for example, session ID 5B155CE8-6AA9-4A3F-BCD0-9D88AF69F196;1 was linked to all three different classifications (laughter, shouting, speech), which is the opposite of the "each sound classification within the same utterance or call will output a separate identifier" explanation. It's the same case for all the other session IDs, and all these session IDs are linked with the same timestamp and the same start time. And it's not uncommon; below is another example from the same subject. I marked the records with the same time with the same name; note that the blue records are again the same session ID linked with three different classifications. In general, why would the same segment be classified into different categories? Does that mean all of them are possible but none is certain? If so, is there any recommended cut-off for the confidence (e.g.
Topic: App & System Services · SubTopic: General
Aug ’25
Reply to SensorKit Speech data question
Let me try to clarify some of this:

Microphone Activation:

Q: How is the microphone being turned on to capture a speech session?
A: Speech metrics are collected when the user has already engaged the microphone, through a Siri utterance or through telephony (a VoIP app, the Phone app, FaceTime). SensorKit does not manipulate the microphone itself.

Q: How is each session determined to be an independent session?
A: Each session mainly marks a Siri utterance or a phone call. But changing system conditions within the audio subsystem during a session (a long phone call where the user starts a call, switches to a Bluetooth headset, and perhaps connects to car audio, for example) may change session IDs.

Negative Values:

Q: In the speech classification data, there are entries where some of the start and end values are negative.
A: This is not expected behavior, and we would need to see what exactly is being sent to your app. Please file a Feedback Report with as many details of the occurrence as possible, and log
Topic: App & System Services · SubTopic: General
Aug ’25
Reply to Dynamic Library cannot call exposed C function
@DTS Engineer After much tweaking, I've managed to reduce the working configuration to:

s.user_target_xcconfig = { 'STRIP_STYLE' => 'non-global' }

This works, but it sets the stripping style of the whole user project. I took a look at the output of the Xcode build phases for the pod target, and -exported_symbols_list doesn't work with CocoaPods (out of the box) because CocoaPods generates a static lib by default. Setting s.static_framework = false did produce a dynamic framework, but the symbols are still stripped. I'm not sure what other consequences STRIP_STYLE would have... it sounds like applying this beyond just my library is a bad idea, but I'm out of ideas on how else to keep the symbols. I also tried passing each symbol directly to the linker, and that also did not work:

s.user_target_xcconfig = { 'OTHER_LDFLAGS' => '$(inherited) -Wl,-u,_ios_prepare_request -Wl,-u,_ios_set_request_header -Wl,-u,_ios_present_webview -W...' }
Topic: Code Signing · SubTopic: General
Aug ’25
Reply to Beta 5 hardEdge Scrollstyle blurs my whole table
When Apple compensates me hourly at your salary to devise sample projects for the bugs I find in your new stuff, maybe I'll take the time to do so. I have a day job, and my hobby is having a top 10 iOS app. I'm already working stupid hours to make your stuff look good (and I've had the most fun since you made everything flat in iOS 7 and killed a bunch of joy, so that's not actually a complaint). My report was mostly to help others with a workaround if they got themselves into that state.

I mean, maybe I'm doing something weird here. I've isolated the bugs I care about in sample projects. It usually takes at least an hour to do a good job because I have a very complex view hierarchy. I'm not making a sample project for every bug for a trillion-dollar company. That sounds like work, not something I should spend time away from my family for.
Topic: UI Frameworks · SubTopic: UIKit
Aug ’25
Xcode 26: can’t get past the agreement acceptance pop-up
I had this happen last year too. I can't remember how I solved the issue; I think I just had to wait, and the 3rd beta opened properly. I went into Terminal and accepted the Xcode agreement there, but still no go. I'm considering reinstalling macOS because it doesn't seem like others are having this issue. Any ideas?

I double-click the Xcode beta and the agreement screen pops up. I click Agree and either double-click my watch button or type in my password, and then nothing: the pop-up doesn't disappear and Xcode never opens. I'm on macOS Sequoia 15.5.
Replies: 4 · Boosts: 0 · Views: 141
Jun ’25
Reply to Unable to discover the BLE device which is in range
How are you identifying these particular devices? And is your app in the foreground when you are not able to identify them? This problem sounds like one of the two typical cases we see:

1. The advertised name and the GAP name of the devices are different. When the device has never been seen before, CoreBluetooth will report the advertised name. Once connected, the GAP name of the device will be cached, and on the next encounter CoreBluetooth will report the cached GAP name instead of the advertised name. If these names are different, and you are specifically looking for a match in the peripheral's name, this could be the mismatch. I would suggest either making both names the same, or looking for either of the possible names.

2. What is the advertising rate of the Wiser device? When your app is not in the foreground, the scan rate will drop dramatically, and if the peripheral is not advertising fast enough, 3 minutes may not be enough to reliably detect the advertising. To reliably detect the device under
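A sketch of point 1, matching on either name; the "Wiser" substring comes from the thread, everything else is illustrative:

```swift
import CoreBluetooth

// Match a peripheral by either its cached GAP name or the advertised name.
final class Scanner: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        // Note: a nil service filter only works in the foreground;
        // background scans must specify service UUIDs.
        central.scanForPeripherals(withServices: nil, options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        let advertisedName = advertisementData[CBAdvertisementDataLocalNameKey] as? String
        let cachedName = peripheral.name   // may be the cached GAP name
        let names = [advertisedName, cachedName].compactMap { $0 }
        if names.contains(where: { $0.contains("Wiser") }) {
            print("Found candidate: \(names)")
        }
    }
}
```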
Topic: App & System Services · SubTopic: Core OS
Aug ’25
General > Login Items > Allow in Background (User visible item names) in Ventura
In the latest beta of Ventura (and perhaps earlier versions) there is a section of the System Settings > General > Login Items pane called Allow in Background. It appears that helpers (LaunchAgents/LaunchDaemons) that are installed by apps are listed here. As you can see in the screenshot below, I have three such items installed on my test system: the per-user LaunchAgent for the Google Updater, the Wireshark LaunchDaemon for the ChmodBPF script, and the LaunchDaemon for my userspace CoreAudio driver (labelled "Metric Halo Distribution, Inc.").

The Wireshark and Google Updater items have nice user-identifiable names associated with them, whereas my LaunchDaemon only has my company name associated with it. I don't see anything in the plists for Wireshark or Google Updater that seems to specify this user-visible string, nor in the bundles the plists point to. How do I go about annotating my LaunchDaemon plist or the helper tool's plist so that the string in this pane helps the user properly identify what this Backgrou
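One hook that looks relevant here, sketched under assumptions: daemons registered via SMAppService pick up the registering app's name in this pane, and for plist-installed items the launchd key AssociatedBundleIdentifiers exists to point an item back at an app bundle. All identifiers below are hypothetical:

```swift
import ServiceManagement

// Registering a daemon via SMAppService lets the Login Items pane
// attribute the item to this app by name. The plist name below
// ("com.example.daemon.plist", bundled in Contents/Library/LaunchDaemons)
// is hypothetical. For items installed outside the app, the launchd key
// AssociatedBundleIdentifiers in the daemon's plist serves a similar
// attribution role.
let service = SMAppService.daemon(plistName: "com.example.daemon.plist")

do {
    try service.register()
    print("Registered, status: \(service.status.rawValue)")
} catch {
    print("Registration failed: \(error)")
}
```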
Replies: 5 · Boosts: 0 · Views: 2.2k
Aug ’25