Hello all... is there a way to close a contour if you have found, say, two points on each side's top "extension"? See image attached. The end goal is a trapezoid-type shape. A code example would be very appreciated, thank you :) I think I have it as a CGPath. So, is there a way to edit a CGPath, or to close the top from a top-left to a top-right point?
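In case it helps show what I mean, here's the kind of thing I'm imagining (a sketch only, assuming the contour is a single open subpath that starts near the top-left point and ends near the top-right one):

import CoreGraphics

// Sketch: copy the detected open contour into a mutable path, then close
// the subpath, which draws the missing "top" edge from the last point back
// to the first and yields the trapezoid-like shape.
func closeTop(of contour: CGPath) -> CGPath {
    let closed = CGMutablePath()
    closed.addPath(contour)   // the detected open contour
    closed.closeSubpath()     // straight line from the end point back to the start
    return closed
}

If the contour doesn't actually start and end at those two points, you'd first trim it, then addLine(to:) between them before closing.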
General
Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and share resources for game developers.
I have a CoreImage pipeline and one of my steps is to rotate my image about the origin (bottom left corner) and then translate it. I'm not seeing the behaviour I'm expecting, and I think my problem is in how I'm combining these two steps.
As an example, I start with an identity transform
(lldb) po transform333
▿ CGAffineTransform
- a : 1.0
- b : 0.0
- c : 0.0
- d : 1.0
- tx : 0.0
- ty : 0.0
I then rotate 1.57 radians (approx. 90 degrees, CCW)
transform333 = transform333.rotated(by: 1.57)
- a : 0.0007963267107332633
- b : 0.9999996829318346
- c : -0.9999996829318346
- d : 0.0007963267107332633
- tx : 0.0
- ty : 0.0
I understand the current contents of the transform.
But then I translate by 10, 10:
(lldb) po transform333.translatedBy(x: 10, y: 10)
- a : 0.0007963267107332633
- b : 0.9999996829318346
- c : -0.9999996829318346
- d : 0.0007963267107332633
- tx : -9.992033562211013
- ty : 10.007960096425679
I was expecting tx and ty to be 10 and 10.
I have noticed that when I reverse the order of these operations, the transform contents look correct. So I'll most likely just perform the steps in what feels to me like the incorrect order.
Is anyone willing/able to point me to an explanation of why the steps I'm performing are giving me these results?
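For reference, the numbers do line up with the (10, 10) vector being pushed through the rotation first, i.e. translatedBy(x:y:) concatenating the translation on the rotated side (quick check):

import CoreGraphics

// The observed tx/ty equal the translation vector run through the rotation,
// because translatedBy(x:y:) applies the translation in the already-rotated
// coordinate space.
let r = CGAffineTransform.identity.rotated(by: 1.57)
let tx = r.a * 10 + r.c * 10  // ≈ -9.992
let ty = r.b * 10 + r.d * 10  // ≈ 10.008
print(tx, ty)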
thanks,
mike
I am extracting a JPEG2000 (JP2) facial image from an NFC passport chip (ISO/IEC 19794-5) and attempting to create a UIImage from it.
On iOS 16, the following code works fine:
import ImageIO
import UIKit
func getUIImage(from imageData: [UInt8]) -> UIImage? {
    let data = Data(imageData)
    guard let imageSource = CGImageSourceCreateWithData(data as CFData, nil),
          let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else {
        print("Failed to decode JP2 image!")
        return nil
    }
    return UIImage(cgImage: cgImage)
}
However, on iOS 18, this fails with errors like:
initialize:1415: *** invalid JPEG2000 file ***
makeImagePlus:3752: *** ERROR: 'JP2 ' - failed to create image [-50]
CGImageSourceCreateImageAtIndex: *** ERROR: failed to create image [-59]
Questions:
Did Apple remove or modify JPEG2000 support in iOS 18?
Is there an official workaround for decoding JPEG2000 on iOS 18?
Should I use Vision/Metal/Core Image instead?
Is there a recommended way to convert JPEG2000 to JPEG/PNG before creating a UIImage?
Are there any Apple-provided APIs that maintain backward compatibility for JPEG2000 decoding?
Additional Info:
The UInt8 array has a valid JPEG2000 header (0x00 0x00 0x00 0x0C 0x6A 0x50 ...).
The image works on iOS 16 but fails on iOS 18.
Tested on iPhone running iOS 18.0 beta.
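As a quick diagnostic (a sketch), I'm checking whether the running OS still registers a JPEG2000 decoder at all:

import Foundation
import ImageIO

// List the image UTIs ImageIO can decode on this OS and look for JPEG2000.
let ids = CGImageSourceCopyTypeIdentifiers() as NSArray as? [String] ?? []
print(ids.contains("public.jpeg-2000") ? "JP2 decoder present" : "JP2 decoder missing")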
Any insights on how to handle JPEG2000 decoding in iOS 18 would be greatly appreciated! 🚀
If I create a bitmap image and then try to get ready to draw into it, like so:
NSBitmapImageRep* newRep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes: nullptr
    pixelsWide: 128
    pixelsHigh: 128
    bitsPerSample: 8
    samplesPerPixel: 4
    hasAlpha: YES
    isPlanar: NO
    colorSpaceName: NSDeviceRGBColorSpace
    bitmapFormat: NSBitmapFormatAlphaNonpremultiplied | NSBitmapFormatThirtyTwoBitBigEndian
    bytesPerRow: 4 * 128
    bitsPerPixel: 32];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep: newRep]];
then the log shows this error:
CGBitmapContextCreate: unsupported parameter combination:
RGB
8 bits/component, integer
512 bytes/row
kCGImageAlphaLast
kCGImageByteOrderDefault
kCGImagePixelFormatPacked
Valid parameters for RGB color space model are:
16 bits per pixel, 5 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipLast
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedLast
32 bits per pixel, 10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
128 bits per pixel, 32 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents
128 bits per pixel, 32 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents
See Quartz 2D Programming Guide (available online) for more information.
If I don't use NSBitmapFormatAlphaNonpremultiplied as part of the format, I don't get the error message. My question is, why does the constant NSBitmapFormatAlphaNonpremultiplied exist if you can't use it like this?
If you're wondering why I wanted to do this: I want to extract the RGBA pixel data from an image, which might have non-premultiplied alpha. And elsewhere online, I saw advice that if you want to look at the pixels of an image, draw it into a bitmap whose format you know and look at those pixels. And I don't want the process of drawing to premultiply my alpha.
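The fallback I'm considering, if non-premultiplied drawing really isn't supported (a sketch, lossy for low-alpha pixels): draw premultiplied, then divide the alpha back out.

import CoreGraphics

// Draw into a premultiplied context, then un-premultiply each pixel to
// recover approximate straight (non-premultiplied) RGBA.
func straightAlphaRGBA(from image: CGImage) -> [UInt8] {
    let w = image.width, h = image.height
    let bytesPerRow = w * 4
    var pixels = [UInt8](repeating: 0, count: h * bytesPerRow)
    pixels.withUnsafeMutableBytes { buffer in
        guard let ctx = CGContext(data: buffer.baseAddress,
                                  width: w, height: h,
                                  bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: CGColorSpace(name: CGColorSpace.sRGB)!,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return }
        ctx.draw(image, in: CGRect(x: 0, y: 0, width: w, height: h))
    }
    for i in stride(from: 0, to: pixels.count, by: 4) where pixels[i + 3] != 0 {
        let a = Int(pixels[i + 3])
        pixels[i]     = UInt8(min(255, Int(pixels[i])     * 255 / a)) // R
        pixels[i + 1] = UInt8(min(255, Int(pixels[i + 1]) * 255 / a)) // G
        pixels[i + 2] = UInt8(min(255, Int(pixels[i + 2]) * 255 / a)) // B
    }
    return pixels
}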
For an app of mine I use CGSetDisplayTransferByTable to adjust the gamma table of the device. Since macOS Tahoe, these modifications are silently ignored. The display's actual gamma curve remains unchanged despite the API reporting successful completion.
I filed a Feedback report for this a few weeks ago and would love to figure out what could be causing it.
FB18559786
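For reference, a minimal repro of what I'm doing (simplified sketch):

import CoreGraphics

// Push a gamma ramp with a visible lift so it's obvious whether the change
// takes effect; on Tahoe the call returns .success with no visible change.
let display = CGMainDisplayID()
let count = 256
var ramp = (0..<count).map { CGGammaValue($0) / CGGammaValue(count - 1) }
ramp = ramp.map { min(1.0, $0 * 0.85 + 0.15) } // lift blacks noticeably
let result = CGSetDisplayTransferByTable(display, UInt32(count), ramp, ramp, ramp)
print(result == .success ? "API reported success" : "error: \(result)")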
I'm trying to apply a CIBumpDistortion Core Image filter to a view that contains a UILabel (my storyLabel). The goal is to create a visual bump/magnifying glass effect over the text.
However, despite my attempts, the filter doesn't seem to render at all. The view and the label appear as normal, with no distortion effect. I've tried adjusting the filter parameters and reviewing the view hierarchy, but without success. I also haven't been able to find clear documentation or examples for applying this filter to a UIView's layer.
//
//  TVView.swift
//  Mistery
//
//  Created by Joje on 31/07/25.
//

import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit
import AVFoundation

final class TVView: UIView {
    // text animation properties
    private var textAnimationTimer: Timer?
    private var fullTextToAnimate: String = ""
    private var currentCharIndex: Int = 0
    // static-video properties
    private var player: AVQueuePlayer?
    private var playerLayer: AVPlayerLayer?
    private var playerLooper: AVPlayerLooper?
    var onNextButtonTap: () -> Void = {}

    // MARK: - Subviews
    // TV image
    private(set) lazy var tvImageView: UIImageView = {
        let imageView = UIImageView()
        imageView.translatesAutoresizingMaskIntoConstraints = false
        imageView.image = UIImage(named: "tvFinal")
        imageView.contentMode = .scaleAspectFit
        return imageView
    }()

    // text that scrolls inside the TV
    private(set) lazy var storyLabel: UILabel = {
        let label = UILabel()
        label.translatesAutoresizingMaskIntoConstraints = false
        //label.backgroundColor = .gray
        label.textColor = .red
        label.font = UIFont(name: "MeltedMonster", size: 30)
        label.textAlignment = .left
        label.numberOfLines = 0
        label.text = ""
        return label
    }()

    private(set) lazy var nextButton: UIButton = {
        let button = UIButton(type: .system)
        button.translatesAutoresizingMaskIntoConstraints = false
        //button.backgroundColor = .darkGray
        button.addTarget(self, action: #selector(didPressNextButton), for: .touchUpInside)
        return button
    }()

    // MARK: - Lifecycle
    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .black
        setupVideoPlayer()
        addSubviews()
        setupConstraints()
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        playerLayer?.frame = tvImageView.frame.insetBy(dx: tvImageView.frame.width * 0.05, dy: tvImageView.frame.height * 0.18)
        setupFisheyeEffect()
    }

    private func setupFisheyeEffect() {
        // create the filter
        guard let filter = CIFilter(name: "CIBumpDistortion") else { return print("error") }
        storyLabel.layer.shouldRasterize = true
        storyLabel.layer.rasterizationScale = UIScreen.main.scale
        // set the parameters
        filter.setDefaults()
        // center of the effect
        let center = CIVector(x: storyLabel.bounds.midX, y: storyLabel.bounds.midY)
        filter.setValue(center, forKey: kCIInputCenterKey)
        // distortion radius
        filter.setValue(storyLabel.bounds.width, forKey: kCIInputRadiusKey)
        // distortion intensity
        filter.setValue(7, forKey: kCIInputScaleKey)
        storyLabel.layer.filters = [filter]
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // MARK: - Button actions
    @objc private func didPressNextButton() {
        onNextButtonTap()
    }

    @objc private func animateNextCharacter() {
        guard currentCharIndex < fullTextToAnimate.count else {
            textAnimationTimer?.invalidate()
            return
        }
        let currentTextIndex = fullTextToAnimate.index(fullTextToAnimate.startIndex, offsetBy: currentCharIndex)
        let partialText = String(fullTextToAnimate[...currentTextIndex])
        storyLabel.text = partialText
        currentCharIndex += 1
    }

    public func updateStoryText(with text: String) {
        textAnimationTimer?.invalidate()
        storyLabel.text = ""
        fullTextToAnimate = text
        currentCharIndex = 0
        textAnimationTimer = Timer.scheduledTimer(timeInterval: 0.12, target: self, selector: #selector(animateNextCharacter), userInfo: nil, repeats: true)
    }

    // MARK: - Setup methods
    private func setupVideoPlayer() {
        guard let videoURL = Bundle.main.url(forResource: "static-video", withExtension: "mov") else {
            print("Error: could not find the video file static-video.mov")
            return
        }
        let playerItem = AVPlayerItem(url: videoURL)
        player = AVQueuePlayer(playerItem: playerItem)
        // LINE WITH POSSIBLE ERROR
        playerLooper = AVPlayerLooper(player: player!, templateItem: playerItem)
        playerLayer = AVPlayerLayer(player: player)
        playerLayer?.videoGravity = .resizeAspectFill
        if let layer = playerLayer {
            self.layer.addSublayer(layer)
        }
        player?.play()
    }

    private func addSubviews() {
        self.addSubview(storyLabel)
        self.addSubview(tvImageView)
        self.addSubview(nextButton)
    }

    private func setupConstraints() {
        NSLayoutConstraint.activate([
            // TV image
            tvImageView.centerXAnchor.constraint(equalTo: centerXAnchor),
            tvImageView.centerYAnchor.constraint(equalTo: centerYAnchor),
            tvImageView.widthAnchor.constraint(equalTo: widthAnchor),
            // TV text
            storyLabel.centerXAnchor.constraint(equalTo: tvImageView.centerXAnchor, constant: -50),
            storyLabel.centerYAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
            storyLabel.widthAnchor.constraint(equalTo: tvImageView.widthAnchor, multiplier: 0.35),
            storyLabel.heightAnchor.constraint(equalTo: tvImageView.heightAnchor, multiplier: 0.42),
            // TV button
            nextButton.topAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
            nextButton.centerXAnchor.constraint(equalTo: self.centerXAnchor, constant: 190),
            nextButton.widthAnchor.constraint(equalToConstant: 100),
            nextButton.heightAnchor.constraint(equalToConstant: 160)
        ])
    }
}

#Preview {
    ViewController()
}
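In case it helps frame the question: the fallback I'm considering (a sketch, assuming CALayer.filters is simply ignored on iOS, which I suspect) is to snapshot the label, distort the snapshot with Core Image, and display the result in an image view instead:

import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit

// Snapshot a view, run it through CIBumpDistortion, and return the result.
func bumpDistortedImage(of view: UIView) -> UIImage? {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    let snapshot = renderer.image { view.layer.render(in: $0.cgContext) }
    guard let input = CIImage(image: snapshot) else { return nil }

    let filter = CIFilter.bumpDistortion()
    filter.inputImage = input
    filter.center = CGPoint(x: input.extent.midX, y: input.extent.midY)
    filter.radius = Float(input.extent.width / 2)
    filter.scale = 0.7

    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: input.extent)
    else { return nil }
    return UIImage(cgImage: cgImage)
}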
I haven't been looking at screensavers for a long time because of Apple's lack of will (or resources?) to provide a public version of the private modern SDK used by Apple for a very long time now.
I'm now looking at the Screen Saver pane in System Settings (the What-If version of System Preferences in an alternate universe where all screens are in portrait mode).
In macOS Sequoia, it seems like 3rd party screensavers are not welcome, considering they are relegated to the "Other" section at the bottom of the list and you have to click Show All to start seeing them.
I also had a quick look at macOS Tahoe Beta 3, and it looks like all the real screensavers are gone (3rd party and the ones from Apple: Hello, Message, Flurry, etc.), or at least it takes a Nobel Prize to find them (and the Search field is not useful).
I tried to install a 3rd party screen saver on macOS Tahoe Beta 3; it doesn't show up in the list.
To summarize:
No public access to modern APIs AFAIK.
UI that is hostile to 3rd party screen savers on macOS Sequoia.
Apparently only screensavers that are slideshows or movies curated by Apple in macOS Tahoe b3.
Hence the question:
Is there any future for screen savers on macOS?
Because if there's none, I won't waste my time trying to update some old screen savers.
I'm experiencing an issue with PDFKit where page.removeAnnotation(annotation) successfully removes the annotation from the page's data structure, but the PDFView no longer updates automatically to reflect the change visually.
Issue Details:
The annotation is removed (verified by checking page.annotations.count)
The PDFView display doesn't refresh to show the removal
This code was working correctly before and suddenly stopped working
No code changes were made on my end
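For context, the workaround I'm testing (a sketch; I don't know if these are the intended calls) is to force a relayout and redraw after the removal:

import PDFKit

// After mutating annotations, nudge PDFView to re-lay out and redraw.
func remove(_ annotation: PDFAnnotation, from page: PDFPage, in pdfView: PDFView) {
    page.removeAnnotation(annotation)
    pdfView.layoutDocumentView()             // re-lay out the document view
    pdfView.setNeedsDisplay(pdfView.bounds)  // force a redraw of the visible area
}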
Hi Apple team,
Game Mode was introduced in iOS 18. To activate Game Mode, an app must include specific key-value pairs in its *.plist and be categorized as a "Game" on the App Store.
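For reference, the Info.plist half of the opt-in is the GCSupportsGameMode key; the App Store category is the half I can't satisfy:

<key>GCSupportsGameMode</key>
<true/>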
My app (https://apps.apple.com/us/app/voidlink/id6747717070) works primarily as a self-hosted game streaming (PC->iPhone/iPad) client. Game Mode provides clear benefits in terms of latency and frame rate stability, but it can currently only be activated when running via Xcode or TestFlight.
I am an individual iOS developer based in China, where an additional government license is required for apps to be listed under the "Game" category on the App Store. Obtaining such a license is very difficult for independent developers, so my app is categorized under "Utilities" instead. (If I moved the app to the Game category, it would disappear from the Chinese App Store immediately.)
Expectation / Suggestion:
Please consider making Game Mode available as a local, user-controllable option on iOS18/26+, such as through a system “App Pool” where users can choose which apps to enable Game Mode for, regardless of App Store category.
This would greatly benefit use cases like streaming clients, benchmarking tools, and remote play utilities, without requiring developers to reclassify their apps as “Games” on App Store.
We are a team of engineers working on an app intended to visualize medical images. The situations where the app is used involve time-critical decision making for acute clinical conditions. Stability and performance are of utmost importance and can directly help timely treatment action. The app we are developing uses multiple libraries and tools such as vtk, webgl, opengl, webkit, and gl-matrix.
The problem can be described as follows: it has been observed that when a 3D volume is rendered in the app and we try to rotate it, the rotation is slow, unresponsive, and laggy. Specifically, we have noticed that on iOS 18.1 the volume rotation is much smoother than on the latest iOS 18.2. Earlier, we faced a somewhat similar issue with iOS 17, but it improved in iOS 18.1. This performance regression is affecting the user experience in our healthcare application.
We have taken reference from the cornerstone.js code and you can reproduce the issue using the following example: https://www.cornerstonejs.org/live-examples/volumeviewport3d
Steps to Reproduce:
Load the above-mentioned test example on an iPhone running version 18.2 using Safari.
Perform volume rendering using the provided dataset.
Measure the time taken by the volume for each rotate or drag action.
Repeat the same steps on an iPhone running version 18.1 for comparison.
Additional Information:
Device models tested: iPhone 12, iPhone 13, iPhone 14
iOS versions with issue: 18.2, 18.3 (beta)
I would appreciate any insights or suggestions on how to address this performance regression. If additional information is needed, please let me know.
Thank you.
Hello, we are working on an iOS game project, and as it progresses the project grows larger and larger. Because we use other game dependencies and libraries, "larger and larger" refers to the whole project; our own source files integrated and compiled by Xcode are not many. Now we seem to have hit a bottleneck: when I add new files, or add functions to existing files to implement a new feature, Xcode gets stuck ("Indexing | Initializing datastore" forever) and cannot produce a final build.
macOS 15.1, Xcode 16.2
Can you provide any solutions to solve this problem?
Also submitted Feedback ID #FB18432749
After updating to macOS 26.1, I encountered an issue where Roblox tends to freeze quite often, for 10-60 seconds at most. This is really annoying, as I play the game a lot. My theory is that it's something like a driver issue with Metal. I have reinstalled macOS, reinstalled the game, and manually lowered the performance settings, but nothing is working.
Wondering if you could help, when it will be fixed, and if others are having the same issue.
Many thanks, William.
Is it possible to start screen recording (through Control Center) without user prompt?
I mean: ask the user for permission the first time, and after that start and stop recording programmatically only?
I need to record screen only for specific events.
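For clarity, the in-app route I know about is ReplayKit (sketch below), but as far as I can tell it still prompts per session and only captures the app's own content, not system-wide Control Center recording:

import ReplayKit

// In-app capture can be started/stopped in code; iOS still shows a consent
// prompt, and only the app's own content is recorded.
func startRecording() {
    RPScreenRecorder.shared().startRecording { error in
        if let error { print("start failed: \(error)") }
    }
}

func stopRecording() {
    RPScreenRecorder.shared().stopRecording { preview, error in
        if let error { print("stop failed: \(error)") }
        _ = preview // can be presented so the user saves/shares the recording
    }
}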
Hello
XQuartz is an open-source effort to develop a version of the X.Org X Window System (https://www.xquartz.org/), widely used to bring graphical support to applications running in remote servers (usually via SSH).
Since macOS Tahoe, XQuartz fails to refresh properly on window resize (more info here https://github.com/XQuartz/XQuartz/issues/438#issuecomment-3371409500), leading to severe usability issues.
The XQuartz developers are already aware of the issue, but I’m wondering if there’s anything we can do at the OS level to resolve it and restore the usual behavior from before macOS Tahoe.
Thanks,
KiM
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS.
I have two questions regarding the newly updated APIs.
From the WWDC23 session "Meet Object Capture for iOS", I know that the Object Capture API uses Point Cloud data captured from the iPhone LiDAR sensor. I want to know how to take the Point Cloud data captured during the iPhone ObjectCaptureSession and use it to create 3D models in PhotogrammetrySession on macOS.
From the example code from WWDC21, I know that PhotogrammetrySession utilizes the depth map from captured photos, embedded in the HEIC images, to create a 3D asset on macOS. I would like to know whether the Point Cloud data is also embedded into the images for use during 3D reconstruction, and if not, how else the Point Cloud data is supplied during reconstruction.
Another question: I know that Point Cloud data is returned as a result of a PhotogrammetrySession.Request. I would like to know if this Point Cloud data is the same set of data captured during the ObjectCaptureSession from WWDC23 that is used to create ObjectCapturePointCloudView.
Thank you to everyone for the help in advance. It's a real pleasure to be developing with all the updates to RealityKit and the Object Capture API.
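For context, this is roughly how I'm driving the macOS side (a sketch; the paths are hypothetical placeholders for the transferred capture folder):

import Foundation
import RealityKit

// Feed the ObjectCaptureSession image folder into PhotogrammetrySession and
// request a model file; the session reads any embedded depth data itself.
func reconstruct() async throws {
    let input = URL(fileURLWithPath: "/path/to/CapturedImages", isDirectory: true) // hypothetical
    let session = try PhotogrammetrySession(input: input)
    try session.process(requests: [
        .modelFile(url: URL(fileURLWithPath: "/path/to/model.usdz")) // hypothetical
    ])
    for try await output in session.outputs {
        switch output {
        case .requestComplete(let request, let result):
            print("finished \(request): \(result)")
        case .requestError(let request, let error):
            print("failed \(request): \(error)")
        default:
            break
        }
    }
}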
After many successive OS and Xcode updates, my Game Controller Swift code generates a "DIS-CONNECTED" message.
Mac Sequoia 15.2
Xcode 16.2
Tried to update PlayStation controller firmware on my Mac.
Still no luck with Xcode and its use of a game controller with tvOS.
I've been playing with the new GameSave API and cannot get it to work.
I followed the 3-step instructions from the Developer video. Step 2, "Next, login to your Apple developer account and include this entitlement in the provisioning profile for your game," seems to be unnecessary, as Xcode sets this for you when you do step 1, "First add the iCloud entitlement to your game."
Running the app on my device and tapping "Load" starts the sync, then fails with the error "Couldn’t communicate with a helper application." I have no idea how to troubleshoot this. Every other time I've used CloudKit it has Just Worked™.
Halp‽
Here is my example app:
import Foundation
import SwiftUI
import GameSave

@main struct GameSaveTestApp: App {
    var body: some Scene {
        WindowGroup {
            GameView()
        }
    }
}

struct GameView: View {
    @State private var loader = GameLoader()

    var body: some View {
        List {
            Button("Load") { loader.load() }
            Button("Finish sync") { Task { try? await loader.finish() } }
        }
    }
}

@Observable class GameLoader {
    var directory: GameSaveSyncedDirectory?

    func stateChanged() {
        let newState = withObservationTracking {
            directory?.state
        } onChange: {
            Task { @MainActor [weak self] in self?.stateChanged() }
        }
        print("State changed to \(newState?.description ?? "nil")")
        switch newState {
        case .error(let error):
            print("ERROR: \(error.localizedDescription)")
        default: _ = 0 // NOOP
        }
    }

    func load() {
        print("Opening gamesave directory")
        directory = GameSaveSyncedDirectory.openDirectory()
        stateChanged()
    }

    func finish() async throws {
        print("finishing syncing")
        await directory?.finishSyncing()
    }
}
Dear Apple Color Management Team,
I’m a professional visual creator working on color-critical photo and graphic projects using macOS (currently 26.1 Tahoe).
In recent macOS releases, LUT-based ICC display profiles (such as XYZ LUT + Matrix types generated by DisplayCAL or professional spectrophotometers) can no longer be installed or activated via ColorSync.
This limitation significantly affects professional workflows in photography, graphic design, prepress, and video color grading — fields that rely on precise display profiling.
The current workaround (converting LUT profiles to simple shaper/matrix ICC v2) results in less accurate tone response and color reproduction, particularly in the dark range and wide-gamut displays.
I kindly request Apple to restore or re-enable the ability to install and use ICC v2/v4 LUT-based display profiles under ColorSync, as was possible on macOS Monterey and Ventura.
This would allow professionals to continue using trusted calibration tools such as DisplayCAL, X-Rite i1Profiler, and Calibrite Profiler to achieve accurate color management.
macOS is widely used in professional creative industries, and restoring this feature would be a huge help for countless photographers, designers, and colorists.
Thank you for your attention and commitment to professional users.
Best regards,
Richárd Deutsch
Professional Photographer
https://riccio.hu/
MacBook Pro (M4 Pro, macOS 26.1)
We are developing a hybrid iOS app where Angular content is rendered inside a WKWebView, hosted by a native Swift application.
We use the GameController framework to detect whether an external Bluetooth keyboard is connected to an iPad. The following code is executed when the app enters the foreground and also when requested by the web layer:
func keyboardStatusHandler() {
    let isKeyboardConnected = GCKeyboard.coalesced != nil
    if !isKeyboardConnected {
        // send status to Angular
    } else {
        // send status to Angular
    }
}
Crash details
We are seeing intermittent crashes on iPad with the following stack trace:
Crashed: GCDeviceSession.HID
0 libobjc.A.dylib 0x7db8 objc_retain_x8 + 16
1 libsystem_blocks.dylib 0xfb8 void HelperBase<ExtendedInline>::copyCapture<(HelperBase<ExtendedInline>::BlockCaptureKind)3>(unsigned int) + 48
2 libsystem_blocks.dylib 0xbc4 HelperBase<GenericInline>::copyBlock(Block_layout*, Block_layout*) + 108
3 libsystem_blocks.dylib 0xc94 _call_copy_helpers_excp + 60
4 libsystem_blocks.dylib 0xef8 _Block_copy + 412
5 libdispatch.dylib 0x1a70 _dispatch_Block_copy + 32
6 libdispatch.dylib 0x792c dispatch_async + 56
7 libdispatch.dylib 0x792c dispatch_channel_async + 56
8 GameController 0xea6dc -[GCKeyboardInput _handleKeyboardEvent:] + 324
9 GameController 0x22508 __53-[_GCKeyboardEventHIDAdapter initWithSource:service:]_block_invoke + 376
10 GameController 0x11d30 -[_GCHIDEventSubject publishHIDEvent:] + 268
11 GameController 0xb79cc __40-[_GCHIDEventUIKitClient initWithQueue:]_block_invoke_3 + 44
12 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
13 libdispatch.dylib 0x12088 _dispatch_async_and_wait_invoke_and_complete_recurse + 272
14 libdispatch.dylib 0x8448 _dispatch_async_and_wait_f + 108
15 GameController 0xb7984 __40-[_GCHIDEventUIKitClient initWithQueue:]_block_invoke_2 + 132
16 GameController 0xb746c __48-[__GCHIDEventUIKitClient _initWithApplication:]_block_invoke + 256
17 UIKitCore 0x11fd394 __61-[UIEventFetcher _setHIDGameControllerEventObserver:onQueue:]_block_invoke_3 + 40
18 libdispatch.dylib 0x1aac _dispatch_call_block_and_release + 32
19 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
20 libdispatch.dylib 0xa2d0 _dispatch_lane_serial_drain + 740
21 libdispatch.dylib 0xadac _dispatch_lane_invoke + 388
22 libdispatch.dylib 0x151dc _dispatch_root_queue_drain_deferred_wlh + 292
23 libdispatch.dylib 0x14a60 _dispatch_workloop_worker_thread + 540
24 libsystem_pthread.dylib 0xa0c _pthread_wqthread + 292
25 libsystem_pthread.dylib 0xaac start_wqthread + 8
Observed scenarios
Crash occurs when the app transitions from background to foreground
Crash also occurs when the Angular layer requests keyboard status, triggering the same code path
Questions
Has anyone encountered crashes related to GCKeyboard.coalesced or GCKeyboardInput like this?
Are there known issues with the GameController framework when querying keyboard state during app lifecycle transitions?
Is there a recommended or safer way to detect external keyboard connection status on iPad (especially when using WKWebView)?
Any insights, known platform issues, or suggested workarounds would be greatly appreciated.
Thanks!
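PS: for the third question, the alternative we're evaluating (a sketch, assuming notification-based tracking is safe here) is to stop polling GCKeyboard.coalesced during lifecycle transitions and track state via the framework's own notifications:

import GameController

// Track keyboard connection state via notifications instead of polling.
final class KeyboardObserver {
    private(set) var isKeyboardConnected = GCKeyboard.coalesced != nil

    init() {
        let center = NotificationCenter.default
        center.addObserver(forName: .GCKeyboardDidConnect, object: nil, queue: .main) { [weak self] _ in
            self?.isKeyboardConnected = true   // send status to Angular here
        }
        center.addObserver(forName: .GCKeyboardDidDisconnect, object: nil, queue: .main) { [weak self] _ in
            self?.isKeyboardConnected = false  // send status to Angular here
        }
    }
}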
Hi fellow devs, I have a quick question: is it possible to have virtual controllers on Mac? For instance, can my app exclusively manage the controller and output it through the Game Controller framework? And create a virtual controller to allow features such as controller emulation, haptic control, and others?
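For context, on iOS I'd reach for GCVirtualController (sketch below, iOS 15+); whether anything equivalent is available to a native macOS app is exactly my question:

import GameController

// Create an on-screen software controller that publishes itself through the
// GameController framework like a physical device.
let config = GCVirtualController.Configuration()
config.elements = [GCInputLeftThumbstick, GCInputButtonA, GCInputButtonB]
let virtualController = GCVirtualController(configuration: config)
virtualController.connect { error in
    if let error { print("connect failed: \(error)") }
}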