When calling:
[UITabBarController setViewControllers:animated:]
it crashes and raises a fatal exception:
Fatal Exception: NSInternalInconsistencyException Attempting to select a view controller that isn't a child! (null)
The crash stack is:
Fatal Exception: NSInternalInconsistencyException
0 CoreFoundation 0x8408c __exceptionPreprocess
1 libobjc.A.dylib 0x172e4 objc_exception_throw
2 Foundation 0x82215c _userInfoForFileAndLine
3 UIKitCore 0x38a468 -[UITabBarController transitionFromViewController:toViewController:transition:shouldSetSelected:]
4 UIKitCore 0x3fa8a4 -[UITabBarController _setSelectedViewController:performUpdates:]
5 UIKitCore 0x3fa710 -[UITabBarController setSelectedIndex:]
6 UIKitCore 0x8a5fc +[UIView(Animation) performWithoutAnimation:]
7 UIKitCore 0x3e54e0 -[UITabBarController _setViewControllers:animated:]
8 UIKitCore 0x45d7a0 -[UITabBarController setViewControllers:animated:]
It only happens sometimes. What's the root cause?
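For context, the workaround I am currently experimenting with is to force a valid selection before swapping the array (a minimal sketch with a hypothetical helper; I am not sure it addresses the root cause):
import UIKit

// Hypothetical helper: reset the selection before replacing the tabs, so the
// controller never tries to re-select a view controller that is no longer a child.
func replaceTabs(on tabBarController: UITabBarController,
                 with newControllers: [UIViewController]) {
    if tabBarController.selectedIndex >= newControllers.count {
        tabBarController.selectedIndex = 0
    }
    tabBarController.setViewControllers(newControllers, animated: false)
}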
Hello Apple Developer Team,
I am facing an issue with remote notifications in my iOS app. When the app is in a terminated (kill) state, notifications are successfully received by the device, but none of the app's handlers (like _firebaseMessagingBackgroundHandler in Flutter) are invoked. This is impacting our ability to process silent notifications or perform background tasks reliably when the app is not running.
Steps to reproduce:
Send a remote notification with content-available: 1 in the payload.
Confirm the notification is received by the device while the app is in kill mode.
Observe that no background or foreground notification methods are triggered in the app.
Expected Behavior: The app should invoke the background handler to process the notification payload, even in a terminated state.
Observed Behavior: The notification is delivered to the device, but no app-level processing occurs because none of the methods are triggered.
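For reference, this is the native entry point we would expect iOS to call for a silent push (a minimal sketch; in our Flutter app this maps to the _firebaseMessagingBackgroundHandler callback):
import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    // Called for remote notifications carrying "content-available": 1 while the
    // app is running or suspended. Our understanding is that iOS does not launch
    // an app the user has force-quit, so this is never reached in that state.
    func application(_ application: UIApplication,
                     didReceiveRemoteNotification userInfo: [AnyHashable: Any],
                     fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
        // Process the notification payload here.
        completionHandler(.newData)
    }
}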
Can you please confirm if this is the intended behavior due to iOS limitations, or if there is a configuration or alternative solution to allow background handlers to execute in such scenarios? Any guidance or clarification would be highly appreciated.
Thank you!
I am trying to notarize with the notarytool command using an app-specific password.
xcrun notarytool submit <Path> --apple-id <APPLE_ID> --password <APP_SPECIFIC_PASSWORD> --team-id <Team-ID>
But it fails with the error: Error: HTTP status code: 401. Unable to authenticate. Invalid session. Ensure that all authentication arguments are correct.
I tried generating a new app-specific password; it still fails.
I also tried storing the password in the keychain with the store-credentials option, which fails as well.
The --verbose option with store-credentials shows the error below:
This process stores your credentials securely in the Keychain. You reference these credentials later using a profile name.
Validating your credentials...
[06:05:28.854Z] Info [API] Initialized Notary API with base URL: https://appstoreconnect.apple.com/notary/v2/\
[06:05:28.854Z] Info [API] Preparing GET request to URL: https://appstoreconnect.apple.com/notary/v2/test?, Parameters: [:], Custom Headers: private<Dictionary<String, String>>
[06:05:28.855Z] Debug [AUTHENTICATION] Delaying current request to refresh app-specific password token.
[06:05:28.855Z] Info [API] Preparing GET request to URL: https://appstoreconnect.apple.com/notary/v2/asp?, Parameters: [:], Custom Headers: private<Dictionary<String, String>>
[06:05:28.855Z] Debug [AUTHENTICATION] Authenticating request to '/notary/v2/asp' with Basic Auth. Username: , Password: private, Team ID:
[06:05:28.856Z] Debug [TASKMANAGER] Starting Task Manager loop to wait for asynchronous HTTP calls.
[06:05:30.194Z] Debug [API] Received response status code: 401, message: unauthorized, URL: https://appstoreconnect.apple.com/notary/v2/asp?, Correlation Key:
[06:05:30.195Z] Error [TASKMANAGER] Completed Task with ID 2 has encountered an error.
[06:05:30.195Z] Debug [TASKMANAGER]Ending Task Manager loop.
Error: HTTP status code: 401. Unable to authenticate. Invalid session. Ensure that all authentication arguments are correct.
For more information, you can contact the support team via the live chat service in the Bibit app or via WhatsApp at +62878-4457-9642, or click the button below for further assistance.
This is the Indodak WhatsApp number: 087844579642
You can use the Direct USD Deposit service without losing value to foreign-exchange spreads (minimum US$10,000) by contacting the Priority Service Line at (021) 8063 0065 or 0878-4457-9642 (WhatsApp chat only).
Hi,
How do I customize tables in SwiftUI? For example, how do I change the background color (the background modifier doesn't work), the separator lines, and the row background colors, and how do I give the header row a different text color and background color?
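For context, here is roughly what I have been experimenting with using a List instead of Table (a sketch; the colors are just placeholders):
import SwiftUI

struct TableStylingView: View {
    let rows = ["One", "Two", "Three"]

    var body: some View {
        List {
            Section {
                ForEach(rows, id: \.self) { row in
                    Text(row)
                        .listRowBackground(Color.yellow)   // row background color
                        .listRowSeparatorTint(.red)        // separator line color
                }
            } header: {
                Text("Header")
                    .foregroundStyle(.white)               // header text color
                    .background(Color.blue)                // header background color
            }
        }
        .scrollContentBackground(.hidden)                  // hide the default list background
        .background(Color.green)                           // background behind the table
    }
}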
Kind Regards
I have followed https://apple.github.io/coremltools/docs-guides/source/installing-coremltools.html but it failed.
It looks like the documentation is outdated.
I have a CALayer and I'd like to animate a property on it. But the property that triggers the animation is different from the one being changed. A basic example of what I'm trying to do is below. I'm trying to create an animation on count by changing triggerProperty. This example is simplified (in my project, triggerProperty is not an Int but a more complex, non-animatable type, so I'm trying to animate it by creating animations for some of its properties that can be mapped to CABasicAnimation, and rendering a version of that class based on the interpolated values).
@objc
class AnimatableLayer: CALayer {
    @NSManaged var triggerProperty: Int
    @NSManaged var count: Int

    override init() {
        super.init()
        triggerProperty = 1
        setNeedsDisplay()
    }

    override init(layer: Any) {
        super.init(layer: layer)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override class func needsDisplay(forKey key: String) -> Bool {
        return key == String(keypath: \AnimatableLayer.triggerProperty) || super.needsDisplay(forKey: key)
    }

    override func action(forKey event: String) -> (any CAAction)? {
        if event == String(keypath: \AnimatableLayer.triggerProperty) {
            if let presentation = self.presentation() {
                let keyPath = String(keypath: \AnimatableLayer.count)
                let animation = CABasicAnimation(keyPath: keyPath)
                animation.duration = 2.0
                animation.timingFunction = CAMediaTimingFunction(name: CAMediaTimingFunctionName.linear)
                animation.fromValue = presentation.count
                animation.toValue = 10
                return animation
            }
        }
        return super.action(forKey: event)
    }

    override func draw(in ctx: CGContext) {
        print("draw")
        NSGraphicsContext.saveGraphicsState()
        let nsctx = NSGraphicsContext(cgContext: ctx, flipped: true) // create NSGraphicsContext
        NSGraphicsContext.current = nsctx // set current context
        let renderText = NSAttributedString(string: "\(self.presentation()?.count ?? self.count)", attributes: [.font: NSFont.systemFont(ofSize: 30)])
        renderText.draw(in: bounds)
        NSGraphicsContext.restoreGraphicsState()
    }

    func animate() {
        print("animate")
        self.triggerProperty = 10
    }
}
With this code, the animation isn't triggered. It seems to get triggered only if the animation's keypath matches the one on the event (in the action func).
Is it possible to do something like this?
Our app involves using the camera to scan barcodes or QR codes, with a working distance of about 5 cm. However, we’ve noticed variations in the focus distance of camera lenses across different iPhone models.
Currently, we mainly use two types of lenses: wide-angle and ultra-wide-angle.
• For iPhone 13 and earlier models, we use the wide-angle lens.
• For iPhone 13 Pro and later models, we use the ultra-wide-angle lens.
We are not certain if this setup is correct since we don’t have all iPhone models to test.
A user has reported focus issues on their iPhone 15.
We would like to ask if there’s a resource where we can find the minimum focus distance of different cameras in each iPhone model. This is to verify whether our current configuration is accurate.
Alternatively, if such data is not readily available, could the Apple team advise which camera should be used on various iPhone models for scenarios with a working distance of approximately 5 cm?
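For reference, rather than hard-coding camera choices per model, we are considering querying the minimum focus distance at runtime (a sketch; the 50 mm threshold simply reflects our 5 cm working distance, and minimumFocusDistance requires iOS 15 or later):
import AVFoundation

// Pick the rear built-in camera whose minimum focus distance (in millimeters,
// -1 if unknown) best fits a ~5 cm working distance.
func cameraForCloseScanning() -> AVCaptureDevice? {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera, .builtInUltraWideCamera],
        mediaType: .video,
        position: .back)

    return discovery.devices
        .filter { $0.minimumFocusDistance > 0 && $0.minimumFocusDistance <= 50 }
        .min { $0.minimumFocusDistance < $1.minimumFocusDistance }
}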
Thank you!
I'm trying to implement anti-spoofing in an iOS app using the iPhone TrueDepth front camera. I have checked the following questions but still can't find a proper working solution.
I trained a Core ML model using 22,000 depth images of human faces and 22,000 non-face images (objects, food, etc.). The accuracy of the model is very low.
When testing with flat 2D images shown on a smartphone screen, I found that I still get a depth map. Even though the image is flat, why does the capture produce a depth map for the person in the flat 2D picture, so that the model thinks it is a real face instead of a spoofed one?
I implemented depth capture by following this documentation, and I made sure that I get a depth map instead of a disparity map:
https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_photos_with_depth
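Concretely, the capture is configured roughly like this (a sketch of my setup following that documentation; error handling omitted):
import AVFoundation

// Configure the TrueDepth front camera for photo capture with depth delivery.
let device = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front)!
let session = AVCaptureSession()
session.beginConfiguration()
session.addInput(try! AVCaptureDeviceInput(device: device))

let photoOutput = AVCapturePhotoOutput()
session.addOutput(photoOutput)
photoOutput.isDepthDataDeliveryEnabled = true   // deliver depth alongside the photo
session.commitConfiguration()

// Request depth data in the capture settings as well.
let settings = AVCapturePhotoSettings()
settings.isDepthDataDeliveryEnabled = true

// In the photo delegate, convert to a true depth (not disparity) format:
// let depth = photo.depthData?.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)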
My next approach was to use the NCNN framework to implement anti-spoofing with the model used in the Mini-vision Android anti-spoofing sample. I rewrote their library for iOS using an Objective-C++ wrapper, since the sample was only available as an Android app. When I tested it by feeding an 80x80 UIImage in an OpenCV matrix format, its accuracy was lower than the Android version's.
How can I solve this problem?
Why did I receive a CONSUMPTION_REQUEST for the same transaction_id the day after I already received a REFUND for it?
I received a CONSUMPTION_REQUEST, but due to a program error, I failed to submit the customer information. Later, I was notified with a REFUND, indicating that the transaction had been refunded. However, the next day, I received another CONSUMPTION_REQUEST notification for the same transaction_id.
Hi!
I'm creating an app like this:
Using Image Tracking to set a world anchor in the real world first.
The timeline in the Reality Composer Pro scene needs to play at the same time (for the people in the same place using the app).
People using the app will see the same content in the same position, at the same time, in the same place.
I already got the Image Tracking feature working, but the big problem is synchronization. I found Group Activities and TabletopKit as possible solutions, but I don't know if these are the right frameworks for this project.
How do I solve this problem technically?
If you have ideas, please let me know. I really need help for this.
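For what it's worth, the direction I have started sketching with Group Activities looks like this (unverified; the activity and message types are my own placeholders):
import Foundation
import GroupActivities

// A SharePlay activity representing the shared scene.
struct SharedTimelineActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Scene"
        meta.type = .generic
        return meta
    }
}

// The moment at which everyone should start the Reality Composer Pro timeline.
struct PlayTimelineMessage: Codable {
    let startDate: Date
}

func observeSessions() async {
    for await session in SharedTimelineActivity.sessions() {
        let messenger = GroupSessionMessenger(session: session)
        session.join()

        // One device broadcasts a start time slightly in the future...
        try? await messenger.send(PlayTimelineMessage(startDate: Date.now.addingTimeInterval(2)))

        // ...and every participant starts the timeline when that time arrives.
        for await (message, _) in messenger.messages(of: PlayTimelineMessage.self) {
            _ = message // schedule the RCP timeline to play at message.startDate
        }
    }
}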
I have a SwiftUI-based program that has compiled and run consistently on previous macOS versions. After upgrading to 15.2 beta 4 to address a known issue with TabView in 15.1.1, my app now enters a severe hang and crashes with:
"The window has been marked as needing another Update Contraints in Window pass, but it has already had more Update Constraints in Window passes than there are views in the window. .<SwiftUI.AppKitWindow: 0x11d82a800> 0x87 (2071) {{44,0},{1468,883}} en"
Is there a known bug that could be causing this crash or known change in the underlying layout model?
Hello, I am currently implementing a biometric authentication registration flow using WebAuthn. I am using ASAuthorizationPlatformPublicKeyCredentialRegistrationRequest, and I would like to know if there is a way to hide the "Save to another device" option that appears during the registration process.
Specifically, I want to guide users to save the passkey only locally on their device, without prompting them to save it to iCloud Keychain or another device.
If there is a way to hide this option or if there is a recommended approach to achieve this, I would greatly appreciate your guidance.
Also, if this is not possible due to iOS version or API limitations, I would be grateful if you could share any best practices for limiting user options in this scenario.
If anyone has experienced a similar issue, your advice would be very helpful. Thank you in advance.
Hello, I am currently working on implementing credential registration for biometric authentication using WebAuthn in an iOS app. I am using ASAuthorizationPlatformPublicKeyCredentialProvider to create a credential registration request based on the data retrieved from the WebAuthn options endpoint.
At the moment, I am only using user.id, user.name, and challenge from the options response, and I am unsure how to utilize the other fields effectively. I would greatly appreciate advice on how to use the following fields:
Fields I would like to use:
rp (Relying Party)
I am retrieving id and name, but I am not sure how best to pass and utilize these fields. Is there an explicit way to use them?
authenticatorSelection
How can I set requireResidentKey and userVerification in ASAuthorizationPlatformPublicKeyCredentialRegistrationRequest? Also, what are the specific benefits of using these fields?
timeout
Is there a way to reflect the timeout value in the credential registration request, and what would be the best way to handle this information in iOS?
attestation
The attestation field can contain values such as none or direct. How should I reflect this in the credential registration request for iOS? I would appreciate a sample implementation or guidance on the benefits of setting this field.
extensions
If I want to customize the authentication flow using the extensions field, how can I appropriately reflect this in iOS? For instance, how can I utilize extensions like credProps?
pubKeyCredParams
Regarding pubKeyCredParams, which is a list of supported public key algorithms, I am unsure how to use it to select an appropriate algorithm in iOS. How should I incorporate this information into the request?
excludeCredentials
I understand that setting excludeCredentials can prevent duplicate registration, but I am not sure how to use past credential information to set it effectively. Any advice on this would be appreciated.
Current Code
Currently, I have implemented the following code, but I am struggling to understand how to add and configure the fields mentioned above.
let publicKeyCredentialProvider = ASAuthorizationPlatformPublicKeyCredentialProvider(
    relyingPartyIdentifier: "www.example.com"
)
let registrationRequest = publicKeyCredentialProvider.createCredentialRegistrationRequest(
    challenge: challenge,
    name: userId,
    userID: userIdData
)
let authController = ASAuthorizationController(authorizationRequests: [registrationRequest])
authController.delegate = self
authController.presentationContextProvider = self
authController.performRequests()
In addition to the above code, I would be grateful if anyone could advise on how to configure fields like rp, authenticatorSelection, attestation, extensions, and pubKeyCredParams as well. Furthermore, I would appreciate any insights into the benefits of setting each of these fields in iOS, and any security considerations to be aware of.
If anyone has experience with this, your guidance would be extremely helpful. Thank you very much in advance!
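For reference, this is the kind of configuration I have been experimenting with for some of these fields, building on the registrationRequest above (a sketch; I am not sure it is correct, existingCredentialIds is a placeholder for IDs returned by the options endpoint, and some of these properties may require newer iOS versions):
// userVerification from authenticatorSelection ("required" / "preferred" / "discouraged")
registrationRequest.userVerificationPreference = .required

// attestation ("none" / "direct" / ...)
registrationRequest.attestationPreference = .direct

// excludeCredentials: previously registered credential IDs, to avoid duplicate registrations
// (existingCredentialIds is a placeholder; this property appears to need a newer iOS version).
registrationRequest.excludedCredentials = existingCredentialIds.map {
    ASAuthorizationPlatformPublicKeyCredentialDescriptor(credentialID: $0)
}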
I have a SwiftUI app that I've been working on in Xcode 16.1. The project builds and runs in the simulators, on my Mac, and on my iPhone/iPad without any issues. I'm also able to build my unit test project and run the tests without any errors. The project has zero warnings.
When I go to the Edit Schemes options and change the Run scheme to be a Release build with the Debug Executable unchecked I get a compiler error:
Command SwiftCompile failed with a nonzero exit code
I've attempted this Release run with the following target devices in Xcode:
My iPhone 15 Pro Max (iOS 18.2 Beta 3)
MacBook Air (M1) (15.2 Beta)
iPhone 16 Simulator (iOS 18.1)
Any iOS Simulator Device (arm64, x86_64)
All of these targets have the same issue. Normally I would just debug the error from the logs, but when I look at the build output I can't see any information to tell me what happened. It looks like the source files are sent to the Swift compiler and the compiler fails without bubbling up the issue.
I've provided the full error log export as a Gist HERE due to its size. Is there anything in the log I'm missing? Is there a way for me to turn on more verbose logging during compilation of a Release build?
I created a brand new Multiplatform App in Xcode and added all of my source files to it. No project configuration settings were changed. I could build it successfully with the Debug configuration. I then changed it to the Release configuration and experienced the same error. I can create another fresh project with the same Release configuration and none of my source files in it and get a successful build.
It seems there is something wrong with my source files and the Release configuration, but the compiler doesn't indicate what. I'm lost at this point, as I can't figure out how to get a release build and can't find any indication as to why.
Since I installed a beta update recently, I have had issues with my phone downloading contacts to my Suzuki Swift.
Today I noticed that the Suzuki Connect app would not open, stating that my 2-month-old iPhone 16 had been jailbroken, whatever that might mean.
I have only ever installed apps from the App Store and updates notified by Apple, so why is just this one app telling me my phone has been jailbroken?
I contacted Suzuki and they have no idea what the problem might be, so I'm hoping someone in the community might be able to help me get everything fixed or tell me more about my issue.
Hi, would anyone be so kind as to guide me on which technologies, kits, APIs, approaches, etc. are useful for creating a horizontal window with a map (preferably MapKit) on visionOS using SwiftUI?
I was hoping to achieve this scenario: the user can walk around and interact with a horizontal map window and also interact with (3D) pins on the map. Something similar was done by SAP in their "SAP Analytics Cloud" app (second image from top).
Since I am a complete beginner in this area, I was looking for a clean, simple solution. I need to know whether AR/RealityKit is necessary, or whether this is achievable using only native SwiftUI. I tried using just Map() with .rotation3DEffect(), which actually makes the map horizontal, but gestures on the map are out of sync, and I really don't know if this approach is valid or complete rubbish.
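For completeness, the rotation attempt I described looks roughly like this (a sketch, not a recommendation):
import SwiftUI
import MapKit

struct HorizontalMapView: View {
    var body: some View {
        Map()
            // Tilting the content makes the map lie flat, but gestures
            // no longer line up with the rotated map afterwards.
            .rotation3DEffect(.degrees(90), axis: (x: 1, y: 0, z: 0))
    }
}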
Any feedback appreciated.
Hello! Currently watching the "Envision the Future: Build great apps for visionOS" webinar, and lots of questions are coming up. Thx for offering this online!
For those of us with "VR legs", how can we go about setting up custom hand/finger gestures that would let us add teleporting and navigation within our fully immersive environments? Both smooth and snap turn/teleport options would be great, thx! This is adjacent to my previous question on how to set up a PS5 controller to do something similar. Think Half-Life: Alyx as the gold standard for VR navigation.
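In case it helps frame the question, this is the direction I imagine for reading hand joints with ARKit on visionOS to drive such gestures (an unverified sketch; the pinch threshold is arbitrary):
import ARKit
import simd

// Run hand tracking and detect a simple pinch (thumb tip close to index tip),
// which could then trigger a snap turn or teleport in the immersive space.
func trackPinch() async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        guard let skeleton = update.anchor.handSkeleton else { continue }

        let thumb = skeleton.joint(.thumbTip).anchorFromJointTransform.columns.3
        let index = skeleton.joint(.indexFingerTip).anchorFromJointTransform.columns.3
        let distance = simd_distance(SIMD3(thumb.x, thumb.y, thumb.z),
                                     SIMD3(index.x, index.y, index.z))

        if distance < 0.02 {   // fingertips ~2 cm apart: treat as a pinch
            // trigger teleport / snap turn here
        }
    }
}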