Hi! Is there any fix for this: sounds are not recreated while using websites with, for example, a virtual piano keyboard or a metronome.
Search results for "Popping Sound" (19,356 results found)
I am implementing the new Push to Talk framework and I found an issue where channelManager(_:didActivate:) is not called after I immediately return a non-nil activeRemoteParticipant from incomingPushResult. I have tested it, and it can play the PTT audio in the foreground and background. This issue only occurs when I join the PTT channel from the app in the foreground, then kill the app. The channel gets restored via channelDescriptor(restoredChannelUUID:). After the channel gets restored, I send a PTT push. I can see that my device is receiving the incomingPushResult and returning the activeRemoteParticipant, and the notification panel shows that A is speaking - but channelManager(_:didActivate:) never gets called, resulting in no audio being played. Rejoining the channel fixes the issue, and reopening the app also seems to fix it.
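For context, here is a minimal sketch (mine, not the poster's code) of the delegate path the post describes, assuming the standard PushToTalk types; the participant and channel names are placeholders, and the remaining required delegate methods are omitted, so the class deliberately does not declare the protocol conformances itself:

import AVFoundation
import PushToTalk
import UIKit

final class PTTHandler: NSObject {

    // PTChannelManagerDelegate: an incoming PTT push arrives. Returning a
    // non-nil active remote participant tells the system someone is speaking.
    func incomingPushResult(channelManager: PTChannelManager,
                            channelUUID: UUID,
                            pushPayload: [String: Any]) -> PTPushResult {
        let speaker = PTParticipant(name: "Remote user", image: nil)
        return .activeRemoteParticipant(speaker)
    }

    // PTChannelManagerDelegate: the audio session only becomes usable once
    // this fires; playback of the incoming talk burst starts here.
    func channelManager(_ channelManager: PTChannelManager,
                        didActivate audioSession: AVAudioSession) {
        // Start the player / audio engine for the incoming audio.
    }

    // PTChannelRestorationDelegate: rebuilds the channel after the app was
    // killed while still joined, which is the scenario in the post.
    func channelDescriptor(restoredChannelUUID channelUUID: UUID) -> PTChannelDescriptor {
        PTChannelDescriptor(name: "Restored channel", image: nil)
    }
}

The reported problem is that, after restoration, the first of these callbacks runs and the system UI updates, but the didActivate callback in the middle never fires, so there is no audio session to play into.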
I realized I never got back to this post to show what I ended up with. Like I mentioned before, I was thinking of putting the audio player in shared application data. It looks like this ...

import SwiftUI
import Observation
import os
@preconcurrency import AVFoundation

@main
struct Chain_TimerApp: App {
    @State private var appData = ApplicationData.shared

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(appData)
        }
    }
}

@Observable
class ApplicationData: @unchecked Sendable {
    var timers: [TimerData] = []
    var timerRunningStates: [UUID: Bool] = [:]
    var isSerial: Bool = false
    var audioData: AudioData
    let logger = Logger(subsystem: Bundle.main.bundleIdentifier ?? "Chain Timer", category: "ApplicationData")

    static let shared: ApplicationData = ApplicationData()
    . . .

For the AudioData structure

struct AudioData {
    var audioPlayer: AVAudioPlayer
    var sound: String = "classicAlarm"
    var logger: Logger

    init(logger: Logger) {
        self.logger = logger
        audioPlayer = AVAudioPlayer()
    }

    mutating func setSo
Topic: UI Frameworks · SubTopic: SwiftUI
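As a small follow-on to the Chain Timer post above, a view could then reach the shared player through the environment. This is a hypothetical usage sketch that assumes the poster's ApplicationData and AudioData types are in scope and that the AVAudioPlayer has already been loaded with a sound:

import SwiftUI

struct AlarmButton: View {
    // Reads the ApplicationData instance injected with .environment(appData).
    @Environment(ApplicationData.self) private var appData

    var body: some View {
        Button("Play alarm") {
            // Assumes the player was given a sound file elsewhere.
            appData.audioData.audioPlayer.play()
        }
    }
}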
Subject: Clarification on Speech Recognition Capability Requirement for iOS

Hi Quinn (The Eskimo!),

Thank you for your reply, and I really appreciate your time. To clarify — I was referring to Apple’s official documentation, including:
- Asking Permission to Use Speech Recognition: https://developer.apple.com/documentation/speech/asking-permission_to_use_speech_recognition
- Recognizing Speech in Live Audio: https://developer.apple.com/documentation/speech/recognizing_speech_in_live_audio

While these documents don’t explicitly mention the need to enable the Speech Recognition capability in the Developer Portal, I’ve come across several trusted sources that do suggest it’s required for full and stable functionality. For example:
- Apple Developer Forum: Thread discussing Speech Framework entitlement: https://developer.apple.com/forums/thread/116446
- Stack Overflow: Speech recognition capability and entitlement setup: https://stackoverflow.com/a/43084875

Both of these sources explain that enabling the Speech Recogni
Topic: Developer Tools & Services · SubTopic: General
I’d like to clarify your requirements here:

[quote='786314021, PixelRothko, /thread/786314, /profile/PixelRothko']
two users see in real time, where the other one touches their screen.
[/quote]

It sounds like:
- There are two users.
- Each has their own device.
- On which they run your app.
- You want to display some UI…
- … that’s shared between the users…
- … so that user actions on one device are immediately visible on the other device.

Is that right? If so:
- What sort of devices? iOS devices? Macs? Or something else?
- How far apart are the users? Are you assuming that the users are close to each other? On the same Wi-Fi network? Or can the users be on different sides of the planet?

Share and Enjoy — Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = eskimo + 1 + @ + apple.com
Topic: Community · SubTopic: Swift Student Challenge
I was about to post a similar question, since I've been struggling with this exact issue for the last 3 days. The default app clip does not pop up when invoked from a QR code or NFC with a URL of the associated domain. I've read and re-read the docs (and re-watched all the app clip WWDC sessions) and they never directly address this. I'm suspecting that yes, Advanced App Clip Experiences are needed to use associated domains. The docs say: A default App Clip experience is required and can be invoked using Safari and Messages, as well as through an NFC tag, QR code, and links in other apps when using an Apple-provided App Clip link. Incredibly confusing. Why do Safari and Messages support the associated domain and not NFC or QR? And why not just say that in simple terms? I'd love for someone from Apple to confirm what the expected behavior is.
Topic: App Store Distribution & Marketing · SubTopic: General
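On the invocation side of the App Clip question above, this is a hypothetical sketch (not from the thread) of how a SwiftUI App Clip receives the invocation URL once an experience does launch, whether from Safari, Messages, a link, a QR code, or an NFC tag; ContentView and the routing are assumed:

import SwiftUI

@main
struct MyAppClip: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
                .onContinueUserActivity(NSUserActivityTypeBrowsingWeb) { activity in
                    // The invocation URL encoded in the link, QR code, or NFC tag.
                    guard let url = activity.webpageURL else { return }
                    // Route to the right content based on the URL's path/query.
                    print("Invoked with \(url)")
                }
        }
    }
}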
I've gotten to the point where I can use the mount(8) command line tool and the -t option to mount a file system using my FSKit file system extension, in which case I can see a process for my extension launch, probe, and perform the other necessary actions. However, when plugging in my USB flash drive or trying to mount with diskutil mount, the file system does not mount:

$ diskutil mount disk20s3
Volume on disk20s3 failed to mount
If you think the volume is supported but damaged, try the readOnly option
$ diskutil mount readOnly disk20s3
Volume on disk20s3 failed to mount
If you think the volume is supported but damaged, try the readOnly option

Initially I thought it would be enough to just implement probeExtension(resource:replyHandler:) and the system would handle the rest, but this doesn't seem to be the case. Even a trivial implementation that always returns .usable doesn't cause the system to use my FSModule, even though I've enabled my extension in System Settings > General > Login Items & Ex
Offshore development can definitely be more affordable, but concerns about quality and communication are real. I’ve worked on quite a few outsourced projects, and the key is really about how you manage the process. Clear expectations, solid QA, and alignment go a long way. I recently wrote a post about common outsourcing mistakes and how to avoid them. If you're curious, you can find it by searching “mistakes in outsourcing software development site:nexadevs.com” on Google, it should pop right up. Hope that helps while you’re figuring out the best approach for your project!
Topic: Developer Tools & Services · SubTopic: General
Greetings. I am having this issue with a Unity PolySpatial visionOS app. We have our main Bounded Volume for our app, and other native UI windows that appear when we interact with objects in our Bounded Volume. If a user closes our main Bounded Volume, sometimes it quits the app and sometimes it doesn't. If we go back to the home screen and reopen the app, our main Bounded Volume doesn't always appear, and just the native UI windows we left open are visible. But we can sometimes still hear sounds that are playing in our Bounded Volume. What solutions are there to make sure our Bounded Volume always appears when the app is open?
I was referring to the official Apple documentation for the Speech framework, specifically:
- Asking Permission to Use Speech Recognition: https://developer.apple.com/documentation/speech/asking-permission_to_use_speech_recognition
- Recognizing Speech in Live Audio: https://developer.apple.com/documentation/speech/recognizing_speech_in_live_audio

While these pages do not explicitly mention enabling the Speech Recognition capability in the Developer Portal, several other trusted sources — including community forums and prior Apple Developer Tech Support responses — have indicated that adding this capability in Xcode (which creates the com.apple.developer.speech entitlement) is required for full and stable functionality. If that is no longer the case, I would be deeply grateful for your clarification — especially since we are still experiencing permission issues and silence from the speech plugin in a properly configured app. Could you please help me with a way to set up speech on my app?
Topic: Developer Tools & Services · SubTopic: General
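For the speech-recognition setup question above, a minimal sketch (not an official sample) of the authorization request itself; it assumes the NSSpeechRecognitionUsageDescription key is present in Info.plist and takes no position on the capability/entitlement debate in the thread:

import Foundation
import Speech

// Requests speech-recognition authorization; the returned Bool reports
// whether recognition is currently allowed for this app.
func requestSpeechAuthorization(completion: @escaping (Bool) -> Void) {
    SFSpeechRecognizer.requestAuthorization { status in
        DispatchQueue.main.async {
            completion(status == .authorized)
        }
    }
}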
We need to be clear about terminology. You wrote:

[quote='786048021, wangshibo, /thread/786048, /profile/wangshibo']
it was found that the app was forcefully quit by the system
[/quote]

Force quit isn’t really an iOS term, but most folks use it to mean that the user removed the app from the multitasking UI by swiping up. That’s not what’s happening here. Rather, it sounds like:
- The user moved your app to the background.
- Shortly thereafter, the system suspended it.
- After a while the system started running short on memory, and so it terminated your app.

This is expected behaviour on most Apple platforms (everything except macOS). Your app is expected to handle it properly.

[quote='786048021, wangshibo, /thread/786048, /profile/wangshibo']
subsequent operations could not be carried out.
[/quote]

Can you elaborate on what those operations were?

Share and Enjoy — Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = eskimo + 1 + @ + apple.com
Topic: App & System Services · SubTopic: Processes & Concurrency
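As an illustration of the "handle it properly" point above, a sketch (my own, with hypothetical names, using the iOS 17 onChange signature) of persisting state as soon as the scene moves to the background, so a later memory-pressure termination loses nothing:

import SwiftUI

struct RootView: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var draftText = ""

    var body: some View {
        TextField("Draft", text: $draftText)
            .onChange(of: scenePhase) { _, newPhase in
                if newPhase == .background {
                    // Save anything that must survive a background termination;
                    // a real app might write to disk or its model store here.
                    UserDefaults.standard.set(draftText, forKey: "draftText")
                }
            }
    }
}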
By a virtual device I mean using an AudioServerPlugIn to create a null audio device, similar to the open-source BlackHole. As you mentioned, for synchronized audio the CoreAudio API AudioHardwareCreateAggregateDevice is used to create an aggregate device. I did this, and it was successful. However, when I switch the microphone to the voice enhancement mode, the number of input channels I retrieve from the aggregate device is incorrect. The specific issue can be seen in the following two help posts:
- https://stackoverflow.com/questions/79319480/mac-audio-software-aggregate-device-issue
- https://developer.apple.com/forums/thread/771690

Therefore, I had to abandon the aggregate device and use separate devices to capture and forward the audio. But this led to synchronization issues. @Systems Engineer
Topic: Media Technologies · SubTopic: Audio
Please review the following document and sample to make sure you're doing all the right stuff for requesting permission to use speech recognition:
- Asking Permission to Use Speech Recognition
- Recognizing speech in live audio

If that doesn't help and you're still getting a code signing error, maybe consider opening a code level support request so I can review your code settings. If you open a support request, please include a link to this thread so the request will be routed to me.
Topic: Developer Tools & Services · SubTopic: General
Hello, Can you please elaborate more on what you mean by virtual audio device? Traditionally if you want synchronized audio you'd need to use the CoreAudio API AudioHardwareCreateAggregateDevice to create an aggregate device, then you can assign this as the intended device for the AVAudioEngine.
Topic: Media Technologies · SubTopic: Audio
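To make the aggregate-device suggestion in this exchange concrete, here is a sketch (mine, not from either post) of creating one with AudioHardwareCreateAggregateDevice; the sub-device UIDs are placeholders that real code would discover from the devices it wants to combine:

import CoreAudio

func makeAggregateDevice() -> AudioObjectID? {
    let description: [String: Any] = [
        kAudioAggregateDeviceNameKey: "My Aggregate",
        kAudioAggregateDeviceUIDKey: "com.example.my-aggregate",
        kAudioAggregateDeviceSubDeviceListKey: [
            // Placeholder UIDs, e.g. the built-in microphone plus a null/virtual device.
            [kAudioSubDeviceUIDKey: "BuiltInMicrophoneDeviceUID"],
            [kAudioSubDeviceUIDKey: "NullAudioDevice_UID"]
        ],
        // Keep the aggregate out of the user-visible device list.
        kAudioAggregateDeviceIsPrivateKey: 1
    ]

    var aggregateID: AudioObjectID = 0
    let status = AudioHardwareCreateAggregateDevice(description as CFDictionary, &aggregateID)
    return status == 0 ? aggregateID : nil  // 0 == kAudioHardwareNoError
}

The resulting device ID can then be selected as the input device for an AVAudioEngine, which is the synchronization approach described above.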
I am building a banking application which has audio/video and text chat. It is intended for contacting bank support. When the user's device auto-locks after 30 seconds, the session is ended and the user needs to initiate it again. Will Apple allow this kind of application to use the Audio, AirPlay, and Picture in Picture or Voice over IP background modes, or is that against Apple's rules (per 2.5.4: https://developer.apple.com/app-store/review/guidelines/)? The chat framework uses WebSockets and SIP.