Search results for “Popping Sound” — 20,034 results found


Reply to How to enter Picture-in-Picture on background from inline playback in WKWebView
[quote='819235021, jimmykuo, /thread/819235, /profile/jimmykuo'] Is there any way to programmatically enter PIP from inline playback when a WKWebView app goes to background? Or is this intentionally restricted by WebKit to fullscreen-only transitions? [/quote] Without user interaction, such as a button press, transitioning to PiP automatically would require AVPlayerViewController to be implemented for the inline video. The issue here is that playsinline does not hand the video off to AVPlayerViewController. Once the inline video uses AVPlayerViewController, you can implement automatic switching to PiP by enabling canStartPictureInPictureAutomaticallyFromInline. This is mentioned in Adopting Picture in Picture in a Standard Player. As for your testing: visibilitychange is not considered user interaction. Only begin PiP playback in response to user interaction, never programmatically. For example, webkitSetPresentationMode will trigger PiP if it is set as the action of a button. In the situations where the video do
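The native setup this reply describes can be sketched roughly as below, assuming you host the inline video in an AVPlayerViewController you own (`videoURL` and `makeInlinePlayer` are illustrative names, not API):

```swift
import AVKit

// Sketch: a player view controller configured so the system may start
// PiP automatically when the app moves to the background, with no
// programmatic trigger. `videoURL` is a placeholder for your media URL.
func makeInlinePlayer(videoURL: URL) -> AVPlayerViewController {
    let playerVC = AVPlayerViewController()
    playerVC.player = AVPlayer(url: videoURL)
    playerVC.allowsPictureInPicturePlayback = true
    // Opt in to system-initiated PiP from inline playback on background.
    playerVC.canStartPictureInPictureAutomaticallyFromInline = true
    return playerVC
}
```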
Topic: Safari & Web · SubTopic: General
Activity: 1w
Unable to capture only the cursor in macOS Tahoe
Precondition: In System Settings, scale the pointer size up to the max. Our SCScreenshotManager code works in macOS 15 and earlier to capture the cursor at its larger size, but broke in one of the minor releases of macOS Tahoe. The error it produces now is Failed to start stream due to audio/video capture failure. This only seems to happen with the cursor window, not any others. Another way to get the cursor is with https://developer.apple.com/documentation/appkit/nscursor/currentsystem, but that is now deprecated, which makes me think capture of the cursor is being blocked deliberately. We see this as a critical loss of functionality for our apps, and could use guidance on what to use instead.
Replies: 1 · Boosts: 0 · Views: 345 · Activity: 1w
Reply to NEED HELP WITH VOICE MEMOS PLEASE
The Apple Developer Forums are for questions about APIs and features intended specifically for developers. Since it sounds like you already went to the Apple Support Community and were sent here because you are running a beta, you should file a bug report in Feedback Assistant.
Activity: 1w
NEED HELP WITH VOICE MEMOS PLEASE
I have a very important voice memo that I recorded on my iPad. While I was recording it, it seemed to work completely, but for some strange reason the voice memo will not play or download to my files, I am unable to send it to anyone, and it basically just comes up blank every time I try to share it. I cannot listen to it and have not been able to listen to it. This is the same on both my iPad and my phone. When I went to Apple, they said to come on here and ask for guidance, and that it might be because my iPhone is updated normally, but my iPad, which the voice memo was recorded on, is updated to a beta. Please give me some advice, and tell me if there's any way I could recover even the transcript of the voice message, because it is truly so important to me. The sound waves are there and everything, and I just don't understand why it will not play on either device. I have also tried copying it. I have also tried trimming the beginning, and it's like the data is there, but it will not pl
Replies: 1 · Boosts: 0 · Views: 73 · Activity: 1w
Reply to Code Signing "Invalid", No Reason Given
[quote='820155021, alex_strong, /thread/820155, /profile/alex_strong'] I've had issues getting the dmg signed by the Apple notary service [/quote] That text suggests that you’ve misunderstood how notary works. The notary service doesn’t sign your product. Rather, you present it with a distribution-ready product, one that’s already signed, and the notary service checks it and, if all is well, issues a signed ticket. See Notarisation Fundamentals for more about how this process works. As to why the notary service is refusing to notarise your product, it’s hard to say without more info. It sounds like you were able to submit the product and get a response, but the status is Invalid, indicating a problem with your submission. In that case the next step is to look at the notary log. What does it say? See Fetching the Notary Log for info on how to get the log. [quote='820155021, alex_strong, /thread/820155, /profile/alex_strong'] The only big change we made this time was switching to Maven [/quote] Ah, Jav
Topic: Code Signing · SubTopic: Notarization
Activity: 1w
Reply to Xcode 26.4: Regressions in Intelligence features
Hi, I have also not determined a pattern. It's often if I pause for a while. It's guaranteed to need one after a long overnight break, but it has also happened during a session. I agree about the thought bubbles. They are almost pointless if you can't read them. Claude has got slower recently, possibly due to an influx of users driven by current political affairs. Re: Codex & Claude showing: Up until 26.4 beta 3, you selected your agent to use, and from then on you just hit new session/chat. In the 26.4 RC there is no choice of active agent. All you do is click new session/chat; when you tap the button, a popover appears requesting the agent you wish to use. It's a very jarring workflow... Hopefully this isn't coming across as too negative. The agentic development flow is amazing, it just needs the warts removed :) I can't reply inline because it limits the characters. Maybe that needs feeding back to the website team. It feels like we don't want people to engage when replies are artificially curtailed.
Activity: 1w
Reply to Help with visionOS pushWindow issues requested
Hey @drewolbrich, Thank you for filing all of these reports! Having each issue tracked separately is really helpful for our investigations. In terms of workarounds, your suggestions sound reasonable, but I don't have specific workarounds to recommend at this time. If you find anything else that helps you avoid the issue, please share it with the community here. For others encountering similar issues: Even though we're aware of this issue, we still encourage you to open a bug report, and post the FB number here once you do. The specific info you include in your bug report might help our investigation, and filing the bug report allows you to get notified when it is resolved. Bug Reporting: How and Why? Thanks, Michael
Topic: Spatial Computing · SubTopic: General
Activity: 1w
Reply to Xcode 26.4: Regressions in Intelligence features
First, thank you for taking the time to post these. We really love hearing from our developers because it helps us make the tools better. Re: OAuth — hmm, OK, we're investigating. Is there any pattern to this? Do you see this after a certain amount of time? Re: Thinking — interesting, this is good feedback about how you're using thinking as progress tracking. We know about the issue where you can't open the popover until the thinking is done. But it sounds like you really want an option to not put thinking in a bubble at all, because it's part of your tracking of the agent's progress. Makes sense. It'd be great if you could file a feedback request specifically for this. Re: Slowness to start — yeah, I agree that sounds weird. The only way we have to debug this at the moment is if you attach the contents of your conversation via the Bug button at the bottom of the transcript. Re: Codex & Claude showing — have you downloaded both agents using the Intelligence settings, even if you haven't logged in?
Activity: 1w
MPS SDPA Attention Kernel Regression on A14-class (M1) in macOS 26.3.1 — Works on A15+ (M2+)
Summary: Since macOS 26, our Core ML / MPS inference pipeline produces incorrect results on the Mac mini M1 (Macmini9,1, A14-class SoC). The same model and code run correctly on M2 and newer (A15-class and up). The regression appears to be in the Scaled Dot-Product Attention (SDPA) kernel path in the MPS backend.
Environment:
Affected: Mac mini M1 — Macmini9,1 (A14-class)
Not affected: M2 and newer (A15-class and up)
Last known good: macOS Sequoia
First broken: macOS 26 (Tahoe)?
Confirmed broken on: macOS 26.3.1
Framework: Core ML + MPS backend
Language: C++ (via the Core ML C++ API)
Description: We ship an audio processing application (VoiceAssist by NoiseWorks) that runs a deep learning model (based on the Demucs architecture) via Core ML with the MPS compute unit. On macOS Sequoia this works correctly on all Apple Silicon Macs, including M1. After updating to macOS 26 (Tahoe), inference on M1 Macs fails — either producing garbage output or crashing. The same binary, same .mlpackage, and same inputs work correctly on M2+. O
Replies: 1 · Boosts: 0 · Views: 188 · Activity: 1w
ScreenCaptureKit recording output is corrupted when captureMicrophone is true
Hello everyone, I'm working on a screen recording app using ScreenCaptureKit and I've hit a strange issue. My app records the screen to an .mp4 file, and everything works perfectly as long as captureMicrophone is false: in that case, I get a valid, playable .mp4 file. However, as soon as I try to enable the microphone by setting streamConfig.captureMicrophone = true, the recording seems to work, but the final .mp4 file is corrupted and cannot be played by QuickTime or any other player. This happens whether capturesAudio (app audio) is on or off. I've already added the Privacy - Microphone Usage Description (NSMicrophoneUsageDescription) to my Info.plist, so I don't think it's a permissions problem. I have my logic split into a ScreenRecorder class that manages state and a CaptureEngine that handles the SCStream. Here is how I'm configuring my SCStream: ScreenRecorder.swift // This is my main SCStreamConfiguration private var streamConfiguration: SCStreamConfiguration { var streamConfig = SCStreamConfi
Replies: 2 · Boosts: 0 · Views: 697 · Activity: 1w
Reply to ScreenCaptureKit recording output is corrupted when captureMicrophone is true
When captureMicrophone is true, ScreenCaptureKit delivers separate audio sample buffers for app audio and microphone audio through the same stream output delegate. The key detail is that these arrive with different CMFormatDescriptions. A few things to check in your CaptureEngine: Make sure you are distinguishing between the two audio stream types in your stream(_:didOutputSampleBuffer:of:) callback. The type parameter will be .audio for app audio and .microphone for mic audio — these need separate AVAssetWriterInput instances with matching format descriptions. If you are writing both to a single AVAssetWriterInput, the interleaved samples with different sample rates or channel counts will corrupt the container. App audio typically comes at the system sample rate (e.g. 48kHz stereo) while microphone audio may arrive at a different rate depending on the input device. Verify the timing: microphone and app audio timestamps
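A minimal sketch of the separation this reply describes, assuming your writing goes through AVAssetWriter (`OutputHandler`, `appAudioInput`, and `micAudioInput` are illustrative names, not API; each input is created lazily from the first buffer's format so sample rate and channel count match what the stream actually delivers):

```swift
import AVFoundation
import ScreenCaptureKit

// Sketch: route app audio and mic audio to separate AVAssetWriterInputs
// keyed off SCStreamOutputType, instead of sharing one input.
final class OutputHandler: NSObject, SCStreamOutput {
    private let writer: AVAssetWriter
    private var appAudioInput: AVAssetWriterInput?
    private var micAudioInput: AVAssetWriterInput?

    init(writer: AVAssetWriter) {
        self.writer = writer
    }

    func stream(_ stream: SCStream,
                didOutputSampleBuffer sampleBuffer: CMSampleBuffer,
                of type: SCStreamOutputType) {
        guard sampleBuffer.isValid else { return }
        switch type {
        case .audio:
            append(sampleBuffer, to: &appAudioInput)  // app audio path
        case .microphone:
            append(sampleBuffer, to: &micAudioInput)  // mic audio path
        default:
            break                                     // video path omitted
        }
    }

    private func append(_ buffer: CMSampleBuffer,
                        to input: inout AVAssetWriterInput?) {
        if input == nil, let desc = CMSampleBufferGetFormatDescription(buffer) {
            // The source format hint makes the writer input match this
            // stream's format rather than assuming one shared format.
            let newInput = AVAssetWriterInput(mediaType: .audio,
                                              outputSettings: nil,
                                              sourceFormatHint: desc)
            newInput.expectsMediaDataInRealTime = true
            if writer.canAdd(newInput) { writer.add(newInput) }
            input = newInput
        }
        if let input, input.isReadyForMoreMediaData {
            input.append(buffer)
        }
    }
}
```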
Topic: Graphics & Games · SubTopic: General
Activity: 1w
Video Audio + Speech To Text
Hello, I am wondering if it is possible to have audio from my AirPods sent to my speech-to-text service while, at the same time, the built-in mic audio input is recorded into a video? I ask because I want my users to be able to say CAPTURE, at which point I start recording a video (with audio from the built-in mic), and then when the user says STOP I stop the recording.
Replies: 2 · Boosts: 0 · Views: 834 · Activity: 1w
Reply to Video Audio + Speech To Text
This is actually possible, though it requires a different approach than the typical single-AVAudioEngine setup. The key insight is that iOS allows multiple AVCaptureSession instances to coexist under certain conditions. You can configure two separate audio routes: Use AVCaptureSession with the AirPods as the input device for your speech recognition pipeline. Set the audio session category to .playAndRecord with .allowBluetooth option. For video recording with the built-in mic, use a second AVCaptureSession (or the camera API you are already using). The built-in mic can be explicitly selected as the audio input for this session. The catch is you need to manage the audio session category carefully. The .mixWithOthers option is essential here — without it, one session will interrupt the other. Another approach that avoids the dual-session complexity: use a single AVCaptureSession that captures from the built-in mic for video, and run SFSpeechRecognizer (or the new SpeechAnalyz
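The audio session configuration the reply's steps depend on can be sketched as follows (a minimal version, assuming iOS and the shared AVAudioSession; whether two capture sessions actually coexist still depends on hardware and the rest of your setup):

```swift
import AVFoundation

// Sketch: configure the shared audio session so two capture paths can
// coexist. .allowBluetooth permits input routing from AirPods, and
// .mixWithOthers keeps one session from interrupting the other.
func configureSharedAudioSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.allowBluetooth, .mixWithOthers])
    try session.setActive(true)
}
```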
Topic: Media Technologies · SubTopic: Audio
Activity: 1w
Reply to AVAudioEngine fails to start during FaceTime call (error 2003329396)
I hit a very similar issue while building ambient-voice — a real-time speech-to-text macOS app using SpeechAnalyzer. AVAudioEngine.inputNode.installTap() worked fine with built-in mics but silently failed with Bluetooth devices (the tap callback never fired). The root cause is similar to yours: audio session resource conflicts. Our fix was switching from AVAudioEngine to AVCaptureSession. The captureOutput(_:didOutput:from:) delegate fires reliably regardless of audio device state or competing audio sessions. The tradeoff is you get CMSampleBuffer instead of AVAudioPCMBuffer, so you need a conversion step — but it is straightforward. For your FaceTime case specifically, AVCaptureSession with .mixWithOthers category option should let you capture mic input without conflicting with the active call audio session. We documented all the audio pitfalls we hit on macOS 26 in our forum post: https://developer.apple.com/forums/thread/819525 The project is open source: https:
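The CMSampleBuffer-to-AVAudioPCMBuffer conversion step mentioned above can be sketched like this (a minimal version; production code would also need to handle non-PCM formats and copy failures):

```swift
import AVFoundation

// Sketch: convert a CMSampleBuffer from AVCaptureSession's audio output
// into an AVAudioPCMBuffer for a pipeline that expects PCM buffers.
func pcmBuffer(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
    guard let desc = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(desc),
          let format = AVAudioFormat(streamDescription: asbd) else {
        return nil
    }
    let frames = AVAudioFrameCount(CMSampleBufferGetNumSamples(sampleBuffer))
    guard let pcm = AVAudioPCMBuffer(pcmFormat: format,
                                     frameCapacity: frames) else {
        return nil
    }
    pcm.frameLength = frames
    // Copy the sample data into the PCM buffer's AudioBufferList.
    let status = CMSampleBufferCopyPCMDataIntoAudioBufferList(
        sampleBuffer,
        at: 0,
        frameCount: Int32(frames),
        into: pcm.mutableAudioBufferList)
    return status == noErr ? pcm : nil
}
```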
Topic: Media Technologies · SubTopic: General
Activity: 1w
Do watchOS widget reloads in an active workout session count against the daily budget?
https://developer.apple.com/documentation/widgetkit/keeping-a-widget-up-to-date lists a number of exceptions, including: The widget's containing app has an active audio or navigation session. https://developer.apple.com/videos/play/wwdc2021/10048/ mentions: However, there are a few situational exceptions that will make these reloads occur both immediately and budget-free. These are when your container app is foreground to the user or when your app is participating in a user session, like Navigation or Now Playing audio. Does an active workout session in a watchOS app count as "your app is participating in a user session", so that calls to WidgetCenter.shared.reloadTimelines(ofKind:) are budget-free?
Replies: 2 · Boosts: 0 · Views: 248 · Activity: 1w
Reply to How to enter Picture-in-Picture on background from inline playback in WKWebView
[quote='819235021, jimmykuo, /thread/819235, /profile/jimmykuo'] Is there any way to programmatically enter PIP from inline playback when a WKWebView app goes to background? Or is this intentionally restricted by WebKit to fullscreen-only transitions? [/quote] Without user interaction, such as a button press, transitioning to PiP automatically would require AVPlayerViewController to be implemented in the inline video. The issue here is playsinline does not hand the video off to AVPlayerViewController. Once the inline video uses AVPlayerViewController, you can implement automatic switching to PiP by enabling canStartPictureInPictureAutomaticallyFromInline. This is mentioned in Adopting Picture in Picture in a Standard Player As for your testing: visibilitychange is not considered user interaction. Only begin PiP playback in response to user interaction and never programmatically. For example, webkitSetPresentationMode will trigger PiP if it is set as the action of a button. In the situations where the video do
Topic: Safari & Web SubTopic: General Tags:
Replies
Boosts
Views
Activity
1w
Unable to capture only the cursor in macOS Tahoe
Precondition: In system settings, scale the pointer size up to the max. Our SCScreenshotManager code currently works in macOS 15 and earlier to capture the cursor at it's larger size, but broke in one of the minor releases of macOS Tahoe. The error it produces now is Failed to start stream due to audio/video capture failure. This only seems to happen with the cursor window, not any others. Another way to get the cursor is with https://developer.apple.com/documentation/appkit/nscursor/currentsystem, but that is now deprecated, which makes me think the capture of the cursor is being blocked deliberately. We see this as a critical loss of functionality for our apps, and could use guidance on what to use instead.
Replies
1
Boosts
0
Views
345
Activity
1w
Reply to NEED HELP WITH VOICE MEMOS PLEASE
The Apple Developer Forums are for questions about APIs and features intended specifically for developers. Since it sounds like you went to the Apple Support Community already and were sent here because you are running a beta, then you should file a bug report in Feedback Assistant.
Replies
Boosts
Views
Activity
1w
NEED HELP WITH VOICE MEMOS PLEASE
I have a very important voice memo that I have recorded on my iPad, as I was recording it, it seemed to have completely worked, but for some strange reason the voice memo will not play, download to my filess, I am unable to send it to anyone, and it basically just pops up blank every time I just try to share it. I cannot listen to it and have not been able to listen to it. This is the same on both my iPad and my phone, when I went to Apple, they said to come on here and ask for guidance, and that it might be because my regular iPhone is updated normally but on my iPad, which was the Voice Memo was recorded on is updated To beta. please give me some advice and if there’s any way, I could recover even the transcript of the voice message because it is truly so important to me. The sound waves are there and everything and I just don’t understand why it will not play on either device. I have also tried copying it. I have also tried trimming the beginning and it’s like the data is there, but it will not pl
Replies
1
Boosts
0
Views
73
Activity
1w
Reply to Code Signing "Invalid", No Reason Given
[quote='820155021, alex_strong, /thread/820155, /profile/alex_strong'] I've had issues getting the dmg signed by the Apple notary service [/quote] That text suggests that you’ve misunderstood how notary works. The notary service doesn’t sign your product. Rather, you present it with a distribution-ready product, one that’s already signed, and the notary service checks it and, if all is well, issues a signed ticket. See Notarisation Fundamentals for more about how this process works. As to why the notary service is refusing to notarise your product, it’s hard to say without more info. It sounds like you were able to submit the product and get a response, but the status is Invalid, indicating a problem with your submission. In that case the next step is to look at the notary log. What does it say? See Fetching the Notary Log for info on how to get the log. [quote='820155021, alex_strong, /thread/820155, /profile/alex_strong'] The only big change we made this time was switching to Maven [/quote] Ah, Jav
Topic: Code Signing SubTopic: Notarization Tags:
Replies
Boosts
Views
Activity
1w
Reply to Xcode 26.4: Regressions in Intelligence features
Hi, I have also not determined a pattern. It's often if I pause for a while. Guaranteed to need a long overnight. But also happened during a session. I agree about the thought bubbles. They are almost pointless if you can't read them. Claude has got slower more recently, may be due to the influx of users due to political affairs. Re: Codex & Claude showing: Up until 26.4 beta 3 and before you selected your agent to use and then from then on you just hit new session/chat. In 26.4 rc there is no choice of active agent. All you do is click new session/chat. When you tap the button a pop over appears requesting the agent you wish to use. It's a very jarring workflow... Hopefully this isn't coming across as too negative. The agentic development flow is amazing, it just needs the warts removed :) I can't reply inline because it limits the characters. Maybe that needs feeding back to the website team. It feels like we don't want people to engage when the replies are artificially curtailed
Replies
Boosts
Views
Activity
1w
Reply to Help with visionOS pushWindow issues requested
Hey @drewolbrich, Thank you for filing all of these reports! Having each issue tracked separately is really helpful for our investigations. In terms of workarounds, your suggestions sound reasonable, but I don't have specific workarounds to recommend at this time. If you find anything else that helps you avoid the issue, please share it with the community here. For others encountering similar issues: Even though we're aware of this issue, we still encourage you to open a bug report, and post the FB number here once you do. The specific info you include in your bug report might help our investigation, and filing the bug report allows you to get notified when it is resolved. Bug Reporting: How and Why? Thanks, Michael
Topic: Spatial Computing SubTopic: General Tags:
Replies
Boosts
Views
Activity
1w
Reply to Xcode 26.4: Regressions in Intelligence features
First, thank you for taking the time to post these. We really love hearing from our developers because this helps us make the tools better. Re: OAuth — hmm, OK we're investigating. Is there any pattern to this? Do you see this after a certain amount of time? Re: Thinking — interesting, this is good feedback about how you're using thinking as progress tracking. We know about the issue where you can't open the popover until the thinking is done. But, sounds like you really want a don't put thinking in a bubble entirely because that's part of your tracking of the agent progress. Makes sense. It'd be great if you could file a feedback request specifically for this. Re: Slowness to start — yeah, I agree that sounds weird. The only way we have to debug this at the moment is if you attach the contents of your conversation via the Bug button at the bottom of the transcript. Re: Codex & Claude showing — Have you downloaded both agents using the Intelligence settings even if you haven't logged in?
Replies
Boosts
Views
Activity
1w
MPS SDPA Attention Kernel Regression on A14-class (M1) in macOS 26.3.1 — Works on A15+ (M2+)
Summary Since macOS 26, our Core ML / MPS inference pipeline produces incorrect results on Mac mini M1 (Macmini9,1, A14-class SoC). The same model and code runs correctly on M2 and newer (A15-class and up). The regression appears to be in the Scaled Dot-Product Attention (SDPA) kernel path in the MPS backend. Environment Affected Mac mini M1 — Macmini9,1 (A14-class) Not affected M2 and newer (A15-class and up) Last known good macOS Sequoia First broken macOS 26 (Tahoe) ? Confirmed broken on macOS 26.3.1 Framework Core ML + MPS backend Language C++ (via CoreML C++ API) Description We ship an audio processing application (VoiceAssist by NoiseWorks) that runs a deep learning model (based on Demucs architecture) via Core ML with the MPS compute unit. On macOS Sequoia this works correctly on all Apple Silicon Macs including M1. After updating to macOS 26 (Tahoe), inference on M1 Macs fails — either producing garbage output or crashing. The same binary, same .mlpackage, same inputs work correctly on M2+. O
Replies
1
Boosts
0
Views
188
Activity
1w
ScreenCaptureKit recording output is corrupted when captureMicrophone is true
Hello everyone, I'm working on a screen recording app using ScreenCaptureKit and I've hit a strange issue. My app records the screen to an .mp4 file, and everything works perfectly until the .captureMicrophone is false In this case, I get a valid, playable .mp4 file. However, as soon as I try to enable the microphone by setting streamConfig.captureMicrophone = true, the recording seems to work, but the final .mp4 file is corrupted and cannot be played by QuickTime or any other player. This happens whether capturesAudio (app audio) is on or off. I've already added the Privacy - Microphone Usage Description (NSMicrophoneUsageDescription) to my Info.plist, so I don't think it's a permissions problem. I have my logic split into a ScreenRecorder class that manages state and a CaptureEngine that handles the SCStream. Here is how I'm configuring my SCStream: ScreenRecorder.swift // This is my main SCStreamConfiguration private var streamConfiguration: SCStreamConfiguration { var streamConfig = SCStreamConfi
Replies
2
Boosts
0
Views
697
Activity
1w
Reply to ScreenCaptureKit recording output is corrupted when captureMicrophone is true
When captureMicrophone is true, ScreenCaptureKit delivers separate audio sample buffers for app audio and microphone audio through the same stream output delegate. The key detail is that these arrive with different CMFormatDescriptions. A few things to check in your CaptureEngine: Make sure you are distinguishing between the two audio stream types in your stream(_:didOutputSampleBuffer:of:) callback. The type parameter will be .audio for app audio and .microphone for mic audio — these need separate AVAssetWriterInput instances with matching format descriptions. If you are writing both to a single AVAssetWriterInput, the interleaved samples with different sample rates or channel counts will corrupt the container. App audio typically comes at the system sample rate (e.g. 48kHz stereo) while microphone audio may arrive at a different rate depending on the input device. Verify the timing: microphone and app audio timestamps
Topic: Graphics & Games SubTopic: General Tags:
Replies
Boosts
Views
Activity
1w
Video Audio + Speech To Text
Hello, I am wondering if it is possible to have audio from my AirPods be sent to my speech to text service and at the same time have the built in mic audio input be sent to recording a video? I ask because I want my users to be able to say CAPTURE and I start recording a video (with audio from the built in mic) and then when the user says STOP I stop the recording.
Replies
2
Boosts
0
Views
834
Activity
1w
Reply to Video Audio + Speech To Text
This is actually possible, though it requires a different approach than the typical single-AVAudioEngine setup. The key insight is that iOS allows multiple AVCaptureSession instances to coexist under certain conditions. You can configure two separate audio routes: Use AVCaptureSession with the AirPods as the input device for your speech recognition pipeline. Set the audio session category to .playAndRecord with .allowBluetooth option. For video recording with the built-in mic, use a second AVCaptureSession (or the camera API you are already using). The built-in mic can be explicitly selected as the audio input for this session. The catch is you need to manage the audio session category carefully. The .mixWithOthers option is essential here — without it, one session will interrupt the other. Another approach that avoids the dual-session complexity: use a single AVCaptureSession that captures from the built-in mic for video, and run SFSpeechRecognizer (or the new SpeechAnalyz
Topic: Media Technologies SubTopic: Audio Tags:
Replies
Boosts
Views
Activity
1w
Reply to AVAudioEngine fails to start during FaceTime call (error 2003329396)
I hit a very similar issue while building ambient-voice — a real-time speech-to-text macOS app using SpeechAnalyzer. AVAudioEngine.inputNode.installTap() worked fine with built-in mics but silently failed with Bluetooth devices (the tap callback never fired). The root cause is similar to yours: audio session resource conflicts. Our fix was switching from AVAudioEngine to AVCaptureSession. The captureOutput(_:didOutput:from:) delegate fires reliably regardless of audio device state or competing audio sessions. The tradeoff is you get CMSampleBuffer instead of AVAudioPCMBuffer, so you need a conversion step — but it is straightforward. For your FaceTime case specifically, AVCaptureSession with .mixWithOthers category option should let you capture mic input without conflicting with the active call audio session. We documented all the audio pitfalls we hit on macOS 26 in our forum post: https://developer.apple.com/forums/thread/819525 The project is open source: https:
Topic: Media Technologies SubTopic: General Tags:
Replies
Boosts
Views
Activity
1w
Do watchOS widget reloads in an active workout session count against the daily budget?
https://developer.apple.com/documentation/widgetkit/keeping-a-widget-up-to-date lists a number of exception including The widget’s containing app has an active audio or navigation session. https://developer.apple.com/videos/play/wwdc2021/10048/ mentions: However, there are a few situational exceptions that will make these reloads occur both immediately and budget-free. These are when your container app is foreground to the user or when your app is participating in a user session, like Navigation or Now Playing audio. Does an active workout session in a watchOS app count as your app is participating in a user session, so calls to WidgetCenter.shared.reloadTimelines(ofKind:) are budget-free?
Replies
2
Boosts
0
Views
248
Activity
1w