Search results for

“Popping Sound”

20,034 results found

Post

Replies

Boosts

Views

Activity

Reply to How to enter Picture-in-Picture on background from inline playback in WKWebView
[quote='819235021, jimmykuo, /thread/819235, /profile/jimmykuo'] Is there any way to programmatically enter PIP from inline playback when a WKWebView app goes to background? Or is this intentionally restricted by WebKit to fullscreen-only transitions? [/quote] Without user interaction, such as a button press, transitioning to PiP automatically would require the inline video to be backed by AVPlayerViewController. The issue here is that playsinline does not hand the video off to AVPlayerViewController. Once the inline video uses AVPlayerViewController, you can implement automatic switching to PiP by enabling canStartPictureInPictureAutomaticallyFromInline. This is mentioned in Adopting Picture in Picture in a Standard Player. As for your testing: visibilitychange is not considered user interaction. Only begin PiP playback in response to user interaction, never programmatically. For example, webkitSetPresentationMode will trigger PiP if it is set as the action of a button. In the situations where the video do
Topic: Safari & Web SubTopic: General Tags:
1w
Reply to NEED HELP WITH VOICE MEMOS PLEASE
The Apple Developer Forums are for questions about APIs and features intended specifically for developers. Since it sounds like you already went to the Apple Support Community and were sent here because you are running a beta, you should file a bug report in Feedback Assistant.
1w
NEED HELP WITH VOICE MEMOS PLEASE
I have a very important voice memo that I recorded on my iPad. As I was recording it, it seemed to have completely worked, but for some strange reason the voice memo will not play or download to my files, I am unable to send it to anyone, and it basically just pops up blank every time I try to share it. I cannot listen to it and have not been able to listen to it. This is the same on both my iPad and my phone. When I went to Apple, they said to come on here and ask for guidance, and that it might be because my regular iPhone is updated normally but my iPad, which the voice memo was recorded on, is updated to a beta. Please give me some advice, and whether there's any way I could recover even the transcript of the voice memo, because it is truly so important to me. The sound waves are there and everything, and I just don't understand why it will not play on either device. I have also tried copying it. I have also tried trimming the beginning, and it's like the data is there, but it will not pl
1
0
73
1w
Reply to Code Signing "Invalid", No Reason Given
[quote='820155021, alex_strong, /thread/820155, /profile/alex_strong'] I've had issues getting the dmg signed by the Apple notary service [/quote] That text suggests that you’ve misunderstood how notary works. The notary service doesn’t sign your product. Rather, you present it with a distribution-ready product, one that’s already signed, and the notary service checks it and, if all is well, issues a signed ticket. See Notarisation Fundamentals for more about how this process works. As to why the notary service is refusing to notarise your product, it’s hard to say without more info. It sounds like you were able to submit the product and get a response, but the status is Invalid, indicating a problem with your submission. In that case the next step is to look at the notary log. What does it say? See Fetching the Notary Log for info on how to get the log. [quote='820155021, alex_strong, /thread/820155, /profile/alex_strong'] The only big change we made this time was switching to Maven [/quote] Ah, Jav
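Fetching the log the reply points to can be sketched like this with notarytool; the submission ID and the keychain profile name ("AC_PASSWORD") are placeholders you would substitute with your own:

```shell
# List recent submissions to find the ID of the Invalid one.
xcrun notarytool history --keychain-profile "AC_PASSWORD"

# Fetch the developer log for that submission; the issues array in the
# JSON explains why the status is Invalid.
xcrun notarytool log <submission-id> --keychain-profile "AC_PASSWORD" developer_log.json
```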
Topic: Code Signing SubTopic: Notarization Tags:
1w
Reply to Xcode 26.4: Regressions in Intelligence features
Hi, I have also not determined a pattern. It's often if I pause for a while. It's guaranteed after a long overnight break, but it has also happened during a session. I agree about the thought bubbles. They are almost pointless if you can't read them. Claude has got slower recently, maybe due to the influx of users from recent political affairs. Re: Codex & Claude showing: Up until 26.4 beta 3, you selected your agent once, and from then on you just hit New Session/Chat. In the 26.4 RC there is no choice of active agent. All you do is click New Session/Chat, and when you tap the button a popover appears requesting the agent you wish to use. It's a very jarring workflow... Hopefully this isn't coming across as too negative. The agentic development flow is amazing, it just needs the warts removed :) I can't reply inline because it limits the characters. Maybe that needs feeding back to the website team. It feels like we don't want people to engage when the replies are artificially curtailed.
1w
Reply to Help with visionOS pushWindow issues requested
Hey @drewolbrich, Thank you for filing all of these reports! Having each issue tracked separately is really helpful for our investigations. In terms of workarounds, your suggestions sound reasonable, but I don't have specific workarounds to recommend at this time. If you find anything else that helps you avoid the issue, please share it with the community here. For others encountering similar issues: Even though we're aware of this issue, we still encourage you to open a bug report, and post the FB number here once you do. The specific info you include in your bug report might help our investigation, and filing the bug report allows you to get notified when it is resolved. Bug Reporting: How and Why? Thanks, Michael
Topic: Spatial Computing SubTopic: General Tags:
1w
Fatal error on rollback after delete
I'm hitting a fatal error when trying to roll back the context after deleting a model with multiple one-to-many relationships, when a problem occurs later in the delete method, before saving the changes. Something like this:

    do {
        // Fetch model
        modelContext.delete(model)
        // Do some async work that potentially throws
        try modelContext.save()
    } catch {
        modelContext.rollback()
    }

When the relationship is empty (the parent has no children) I can safely delete and roll back with no issues. However, when there is even one child, even this code:

    modelContext.delete(someModel)
    modelContext.rollback()

gives me a fatal error:

    SwiftData/ModelSnapshot.swift:46: Fatal error: Unexpected backing data for snapshot creation: SwiftData._FullFutureBackingData

I use the ModelContext from within a ModelActor, but using mainContext changes nothing. My ModelContainer is quite simple, and the problem occurs with both in-memory and persistent storage, with or without the CloudKit database being enabled. I can isolate the issue in test e
2
0
98
1w
Reply to Xcode 26.4: Regressions in Intelligence features
First, thank you for taking the time to post these. We really love hearing from our developers because it helps us make the tools better. Re: OAuth — hmm, OK, we're investigating. Is there any pattern to this? Do you see it after a certain amount of time? Re: Thinking — interesting, this is good feedback about using thinking as progress tracking. We know about the issue where you can't open the popover until the thinking is done. But it sounds like you'd really like thinking not to be put in a bubble at all, because it's part of how you track the agent's progress. Makes sense. It'd be great if you could file a feedback request specifically for this. Re: Slowness to start — yeah, I agree that sounds weird. The only way we have to debug this at the moment is if you attach the contents of your conversation via the Bug button at the bottom of the transcript. Re: Codex & Claude showing — have you downloaded both agents using the Intelligence settings, even if you haven't logged in?
1w
MPS SDPA Attention Kernel Regression on A14-class (M1) in macOS 26.3.1 — Works on A15+ (M2+)
Summary: Since macOS 26, our Core ML / MPS inference pipeline produces incorrect results on the Mac mini M1 (Macmini9,1, A14-class SoC). The same model and code run correctly on M2 and newer (A15-class and up). The regression appears to be in the Scaled Dot-Product Attention (SDPA) kernel path in the MPS backend.

Environment:
- Affected: Mac mini M1 (Macmini9,1, A14-class)
- Not affected: M2 and newer (A15-class and up)
- Last known good: macOS Sequoia
- First broken: macOS 26 (Tahoe)?
- Confirmed broken on: macOS 26.3.1
- Framework: Core ML + MPS backend
- Language: C++ (via the Core ML C++ API)

Description: We ship an audio processing application (VoiceAssist by NoiseWorks) that runs a deep learning model (based on the Demucs architecture) via Core ML with the MPS compute unit. On macOS Sequoia this works correctly on all Apple Silicon Macs, including M1. After updating to macOS 26 (Tahoe), inference on M1 Macs fails, either producing garbage output or crashing. The same binary, same .mlpackage, and same inputs work correctly on M2+. O
1
0
188
1w
Reply to ScreenCaptureKit recording output is corrupted when captureMicrophone is true
When captureMicrophone is true, ScreenCaptureKit delivers separate audio sample buffers for app audio and microphone audio through the same stream output delegate. The key detail is that these arrive with different CMFormatDescriptions. A few things to check in your CaptureEngine: Make sure you are distinguishing between the two audio stream types in your stream(_:didOutputSampleBuffer:of:) callback. The type parameter will be .audio for app audio and .microphone for mic audio — these need separate AVAssetWriterInput instances with matching format descriptions. If you are writing both to a single AVAssetWriterInput, the interleaved samples with different sample rates or channel counts will corrupt the container. App audio typically comes at the system sample rate (e.g. 48kHz stereo) while microphone audio may arrive at a different rate depending on the input device. Verify the timing: microphone and app audio timestamps
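The routing the reply describes can be sketched as follows. This is a sketch only: `appAudioInput` and `micAudioInput` are hypothetical AVAssetWriterInput properties you would create from each stream's own format description; video handling and writer setup are omitted.

```swift
import ScreenCaptureKit
import AVFoundation

final class CaptureOutput: NSObject, SCStreamOutput {
    var appAudioInput: AVAssetWriterInput?
    var micAudioInput: AVAssetWriterInput?

    func stream(_ stream: SCStream,
                didOutputSampleBuffer sampleBuffer: CMSampleBuffer,
                of type: SCStreamOutputType) {
        guard sampleBuffer.isValid else { return }
        switch type {
        case .audio:
            // App/system audio goes to its own writer input.
            if appAudioInput?.isReadyForMoreMediaData == true {
                appAudioInput?.append(sampleBuffer)
            }
        case .microphone:
            // Mic audio gets a separate input; its sample rate may differ.
            if micAudioInput?.isReadyForMoreMediaData == true {
                micAudioInput?.append(sampleBuffer)
            }
        case .screen:
            break // video path omitted in this sketch
        @unknown default:
            break
        }
    }
}
```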
Topic: Graphics & Games SubTopic: General Tags:
1w
Reply to Video Audio + Speech To Text
This is actually possible, though it requires a different approach than the typical single-AVAudioEngine setup. The key insight is that iOS allows multiple AVCaptureSession instances to coexist under certain conditions. You can configure two separate audio routes: Use AVCaptureSession with the AirPods as the input device for your speech recognition pipeline. Set the audio session category to .playAndRecord with .allowBluetooth option. For video recording with the built-in mic, use a second AVCaptureSession (or the camera API you are already using). The built-in mic can be explicitly selected as the audio input for this session. The catch is you need to manage the audio session category carefully. The .mixWithOthers option is essential here — without it, one session will interrupt the other. Another approach that avoids the dual-session complexity: use a single AVCaptureSession that captures from the built-in mic for video, and run SFSpeechRecognizer (or the new SpeechAnalyz
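A minimal sketch of the audio-session and device-selection setup described above, under the assumption that the built-in mic should feed the video-recording session while Bluetooth input remains available; error handling and the second session's full configuration are trimmed:

```swift
import AVFoundation

func configureCaptureAudio() throws {
    // Category/options so the two routes can coexist rather than interrupt.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            options: [.allowBluetooth, .mixWithOthers])
    try session.setActive(true)

    // Video-recording session: explicitly select the built-in microphone
    // instead of whatever the current route (e.g. AirPods) would provide.
    let capture = AVCaptureSession()
    if let builtIn = AVCaptureDevice.default(.builtInMicrophone,
                                             for: .audio,
                                             position: .unspecified),
       let input = try? AVCaptureDeviceInput(device: builtIn),
       capture.canAddInput(input) {
        capture.addInput(input)
    }
    capture.startRunning()
}
```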
Topic: Media Technologies SubTopic: Audio Tags:
1w
Reply to AVAudioEngine fails to start during FaceTime call (error 2003329396)
I hit a very similar issue while building ambient-voice — a real-time speech-to-text macOS app using SpeechAnalyzer. AVAudioEngine.inputNode.installTap() worked fine with built-in mics but silently failed with Bluetooth devices (the tap callback never fired). The root cause is similar to yours: audio session resource conflicts. Our fix was switching from AVAudioEngine to AVCaptureSession. The captureOutput(_:didOutput:from:) delegate fires reliably regardless of audio device state or competing audio sessions. The tradeoff is you get CMSampleBuffer instead of AVAudioPCMBuffer, so you need a conversion step — but it is straightforward. For your FaceTime case specifically, AVCaptureSession with .mixWithOthers category option should let you capture mic input without conflicting with the active call audio session. We documented all the audio pitfalls we hit on macOS 26 in our forum post: https://developer.apple.com/forums/thread/819525 The project is open source: https:
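The CMSampleBuffer-to-AVAudioPCMBuffer conversion step the post mentions could look roughly like this, assuming LPCM buffers from the capture output; a rejected copy returns nil rather than a partially filled buffer:

```swift
import AVFoundation

func pcmBuffer(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
    // Derive the AVAudioFormat from the sample buffer's own description.
    guard let desc = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(desc),
          let format = AVAudioFormat(streamDescription: asbd) else { return nil }

    let frames = AVAudioFrameCount(CMSampleBufferGetNumSamples(sampleBuffer))
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: frames) else { return nil }
    buffer.frameLength = frames

    // Copy the PCM data into the buffer's audio buffer list.
    let status = CMSampleBufferCopyPCMDataIntoAudioBufferList(
        sampleBuffer,
        at: 0,
        frameCount: Int32(frames),
        into: buffer.mutableAudioBufferList)
    return status == noErr ? buffer : nil
}
```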
Topic: Media Technologies SubTopic: General Tags:
1w
Reply to CloudKit, cannot deploy private database initial schema to production
When running the app in the development environment, I can see that data is saved and can be retrieved successfully. However, in the iCloud console, I don't see any record types or even the custom zone. Hmm, this doesn't sound right. Would you mind sharing the detailed steps you used to reproduce the issue? It would be really strange if you see the data but not the schema in CloudKit Console. Be sure that you choose the right container and the development environment. Additionally, I'm unable to deploy any schema to production because no changes are detected. CloudKit Console doesn't deploy the schema if the schema in the development environment is indeed empty. Installing the app from TestFlight, when trying to upload a record CloudKit reports this error: The error indicates that MyType didn't exist and the attempt to create it failed, which is as designed, because a TestFlight app by default uses the CloudKit production environment, and creating a new record type isn't allowed in that environment. Be
1w
Reply to Do watchOS widget reloads in an active workout session count against the daily budget?
Thanks for the post. I'm not an expert in watchOS, but I'm relatively familiar with Live Activities and Widgets; I'm waiting for confirmation from a watchOS engineer on this. On watchOS, starting an HKWorkoutSession elevates your app's lifecycle state. The system considers your app to be actively in use by the user, equivalent to being in the foreground or having an active audio/navigation session. Reading the documentation and trying to make sense of it: because the system recognizes the user is actively engaged in the workout, WidgetKit suspends the standard daily reload budget to allow unlimited complication updates during an active workout. The budget exception only applies while the HKWorkoutSession is in the .running state. If the workout is paused, ended, or suspended, your app loses this elevated privilege and any subsequent calls to reloadTimelines will immediately start counting against your standard daily budget. Again, inviting watchOS experts to jump in on the thread to veri
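The reload pattern being discussed can be sketched like this; "MyComplication" is a placeholder widget kind, and the budget behavior while the session is running is exactly what the thread is waiting to confirm:

```swift
import WidgetKit
import HealthKit

func workoutStateChanged(_ session: HKWorkoutSession) {
    if session.state == .running {
        // While the workout session is in .running, reloads may be exempt
        // from the daily budget (pending confirmation in this thread).
        WidgetCenter.shared.reloadTimelines(ofKind: "MyComplication")
    }
    // Once paused/ended, further reloads count against the normal budget.
}
```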
1w
Unable to capture only the cursor in macOS Tahoe
Precondition: In System Settings, scale the pointer size up to the max. Our SCScreenshotManager code works in macOS 15 and earlier to capture the cursor at its larger size, but broke in one of the minor releases of macOS Tahoe. The error it produces now is Failed to start stream due to audio/video capture failure. This only seems to happen with the cursor window, not any others. Another way to get the cursor is with https://developer.apple.com/documentation/appkit/nscursor/currentsystem, but that is now deprecated, which makes me think capture of the cursor is being blocked deliberately. We see this as a critical loss of functionality for our apps, and could use guidance on what to use instead.
1
0
345
1w