I have a CALayer and I'd like to animate a property on it. But the property that triggers the animation is different from the one being changed. A basic example of what I'm trying to do is below: I'm trying to create an animation on count by changing triggerProperty. This example is simplified (in my project, triggerProperty is not an Int but a more complex, non-animatable type, so I'm trying to animate it by creating animations for those of its properties that can be mapped to a CABasicAnimation, and rendering a version of the class based on the interpolated values).
import AppKit
import QuartzCore

@objc
class AnimatableLayer: CALayer {
    // String(keypath:) is a helper (not shown) that returns the property
    // name of a key path, e.g. "triggerProperty" or "count".
    @NSManaged var triggerProperty: Int
    @NSManaged var count: Int

    override init() {
        super.init()
        triggerProperty = 1
        setNeedsDisplay()
    }

    override init(layer: Any) {
        super.init(layer: layer)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override class func needsDisplay(forKey key: String) -> Bool {
        return key == String(keypath: \AnimatableLayer.triggerProperty) || super.needsDisplay(forKey: key)
    }

    override func action(forKey event: String) -> (any CAAction)? {
        if event == String(keypath: \AnimatableLayer.triggerProperty) {
            if let presentation = self.presentation() {
                let keyPath = String(keypath: \AnimatableLayer.count)
                let animation = CABasicAnimation(keyPath: keyPath)
                animation.duration = 2.0
                animation.timingFunction = CAMediaTimingFunction(name: .linear)
                animation.fromValue = presentation.count
                animation.toValue = 10
                return animation
            }
        }
        return super.action(forKey: event)
    }

    override func draw(in ctx: CGContext) {
        print("draw")
        NSGraphicsContext.saveGraphicsState()
        let nsctx = NSGraphicsContext(cgContext: ctx, flipped: true)
        NSGraphicsContext.current = nsctx
        let renderText = NSAttributedString(
            string: "\(self.presentation()?.count ?? self.count)",
            attributes: [.font: NSFont.systemFont(ofSize: 30)]
        )
        renderText.draw(in: bounds)
        NSGraphicsContext.restoreGraphicsState()
    }

    func animate() {
        print("animate")
        self.triggerProperty = 10
    }
}
With this code, the animation isn't triggered. It seems to get triggered only if the animation's keyPath matches the key of the event passed to action(forKey:).
Is it possible to do something like this?
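For what it's worth, the only workaround I've found so far is to bypass the action(forKey:) indirection and attach the animation explicitly when the trigger changes; a minimal sketch (animateExplicitly is my own name, and the hard-coded key path strings stand in for the String(keypath:) helper):

extension AnimatableLayer {
    // Workaround sketch: drive the `count` animation explicitly instead of
    // relying on action(forKey:) to return an action for a different key path.
    func animateExplicitly() {
        let animation = CABasicAnimation(keyPath: "count")
        animation.duration = 2.0
        animation.timingFunction = CAMediaTimingFunction(name: .linear)
        animation.fromValue = presentation()?.count ?? count
        animation.toValue = 10

        // Update the model values without triggering implicit actions,
        // then attach the explicit animation.
        CATransaction.begin()
        CATransaction.setDisableActions(true)
        triggerProperty = 10
        count = 10
        CATransaction.commit()

        add(animation, forKey: "count")
    }
}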
Our app involves using the camera to scan barcodes or QR codes, with a working distance of about 5 cm. However, we’ve noticed variations in the focus distance of camera lenses across different iPhone models.
Currently, we mainly use two types of lenses: wide-angle and ultra-wide-angle.
• For iPhone 13 and earlier models, we use the wide-angle lens.
• For iPhone 13 Pro and later models, we use the ultra-wide-angle lens.
We are not certain if this setup is correct since we don’t have all iPhone models to test.
A user has reported focus issues on his iPhone 15.
We would like to ask if there’s a resource where we can find the minimum focus distance of different cameras in each iPhone model. This is to verify whether our current configuration is accurate.
Alternatively, if such data is not readily available, could the Apple team advise which camera should be used on various iPhone models for scenarios with a working distance of approximately 5 cm?
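In the meantime, we are considering querying the hardware at runtime instead of hard-coding a lens per model; a minimal sketch of the idea using AVCaptureDevice.minimumFocusDistance (available since iOS 15; cameraForCloseScanning is our own helper name):

import AVFoundation

// Sketch: pick the camera whose minimum focus distance fits a ~5 cm
// working distance, instead of hard-coding a lens per iPhone model.
func cameraForCloseScanning(workingDistanceMM: Int = 50) -> AVCaptureDevice? {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera, .builtInUltraWideCamera],
        mediaType: .video,
        position: .back
    )
    // minimumFocusDistance is reported in millimeters; -1 means unknown.
    return discovery.devices.first { device in
        device.minimumFocusDistance > 0 && device.minimumFocusDistance <= workingDistanceMM
    }
}

Would this be a reasonable way to verify the configuration on devices we don't have?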
Thank you!
I'm trying to implement anti-spoofing in an iOS app using the iPhone TrueDepth front camera. I have checked the following questions but still can't find a proper working solution.
I trained a Core ML model using 22,000 depth images of human faces and 22,000 images of non-faces (objects, food, etc.). The accuracy of the model is very low.
When testing with flat 2D images shown on a smartphone screen, I found that I get a depth map even for those flat images. Even though the image is flat, how does it produce a depth map for the person shown in the flat 2D picture? As a result, the model thinks it is a real face instead of a spoofed one.
I implemented depth capture by following this documentation, and I made sure that I get a depth map instead of a disparity map:
https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_photos_with_depth
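For reference, this is roughly how I am configuring capture and normalizing the data; a sketch, with the disparity-to-depth conversion being the part I'd most like checked:

import AVFoundation

// Sketch: enable depth delivery on the TrueDepth camera and convert
// whatever format the device produces into a true depth map.
func configureDepthCapture(session: AVCaptureSession,
                           photoOutput: AVCapturePhotoOutput) throws {
    guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                               for: .video,
                                               position: .front) else { return }
    session.beginConfiguration()
    session.addInput(try AVCaptureDeviceInput(device: device))
    session.addOutput(photoOutput)
    // Must be set after the output has been added to the session.
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
    session.commitConfiguration()
}

func depthMap(from photo: AVCapturePhoto) -> AVDepthData? {
    guard var depthData = photo.depthData else { return nil }
    // Convert disparity to depth if the device delivered disparity.
    if depthData.depthDataType != kCVPixelFormatType_DepthFloat32 {
        depthData = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    }
    return depthData
}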
My next approach was to use the NCNN framework to implement anti-spoofing with the model used in the Mini-vision Android anti-spoofing sample. I rewrote their library for iOS using an Objective-C++ wrapper around the C++ code, as the sample was only available as an Android app. When I tested it by feeding an 80x80 UIImage in an OpenCV matrix format, its accuracy is lower than the Android one.
How can I solve this problem?
Why did I receive a CONSUMPTION_REQUEST for the same transaction_id the day after I already received a REFUND for it?
I received a CONSUMPTION_REQUEST, but due to a program error, I failed to submit the customer information. Later, I received a REFUND notification, indicating that the transaction had been refunded. However, the next day, I received another CONSUMPTION_REQUEST notification for the same transaction_id.
Hi!
I'm creating an app like this:
Using image tracking to set a world anchor in the real world first.
The timeline in the Reality Composer Pro scene needs to be played at the same time (for the people using the app in the same place).
People using the app will see the same content in the same position, at the same time, in the same place.
I already made the image tracking feature work. But the big problem is synchronization. I found Group Activities and TabletopKit as possible solutions, but I don't know if these are the right frameworks for this project.
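To make the question concrete, this is the direction I'm exploring with Group Activities; a minimal sketch where SharedTimeline and PlaybackMessage are my own types, not from any sample:

import GroupActivities
import Foundation

// Sketch: a custom activity so everyone in the session can sync
// playback of the Reality Composer Pro timeline.
struct SharedTimeline: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared RCP Timeline"
        meta.type = .generic
        return meta
    }
}

struct PlaybackMessage: Codable {
    let startTime: TimeInterval // when everyone should start the timeline
}

func join(session: GroupSession<SharedTimeline>) {
    let messenger = GroupSessionMessenger(session: session)
    Task {
        for await (message, _) in messenger.messages(of: PlaybackMessage.self) {
            // Start the Reality Composer Pro timeline at message.startTime here.
            print("Play timeline at \(message.startTime)")
        }
    }
    session.join()
}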
How do I solve this problem technically?
If you have ideas, please let me know. I really need help with this.
I have a SwiftUI-based program that has compiled and run consistently on previous macOS versions. After upgrading to 15.2 beta 4 to address a known issue with TabView in 15.1.1, my app now enters a severe hang and crashes with:
"The window has been marked as needing another Update Constraints in Window pass, but it has already had more Update Constraints in Window passes than there are views in the window. <SwiftUI.AppKitWindow: 0x11d82a800> 0x87 (2071) {{44,0},{1468,883}} en"
Is there a known bug that could be causing this crash or known change in the underlying layout model?
Hello, I am currently implementing a biometric authentication registration flow using WebAuthn. I am using ASAuthorizationPlatformPublicKeyCredentialRegistrationRequest, and I would like to know if there is a way to hide the "Save to another device" option that appears during the registration process.
Specifically, I want to guide users to save the passkey only locally on their device, without prompting them to save it to iCloud Keychain or another device.
If there is a way to hide this option or if there is a recommended approach to achieve this, I would greatly appreciate your guidance.
Also, if this is not possible due to iOS version or API limitations, I would be grateful if you could share any best practices for limiting user options in this scenario.
If anyone has experienced a similar issue, your advice would be very helpful. Thank you in advance.
Hello, I am currently working on implementing credential registration for biometric authentication using WebAuthn in an iOS app. I am using ASAuthorizationPlatformPublicKeyCredentialProvider to create a credential registration request based on the data retrieved from the WebAuthn options endpoint.
At the moment, I am only using user.id, user.name, and challenge from the options response, and I am unsure how to utilize the other fields effectively. I would greatly appreciate advice on how to use the following fields:
**Fields I would like to use:**
rp (Relying Party)
I am retrieving id and name, but I am not sure how best to pass and utilize these fields. Is there an explicit way to use them?
authenticatorSelection
How can I set requireResidentKey and userVerification in ASAuthorizationPlatformPublicKeyCredentialRegistrationRequest? Also, what are the specific benefits of using these fields?
timeout
Is there a way to reflect the timeout value in the credential registration request, and what would be the best way to handle this information in iOS?
attestation
The attestation field can contain values such as none or direct. How should I reflect this in the credential registration request for iOS? I would appreciate a sample implementation or guidance on the benefits of setting this field.
extensions
If I want to customize the authentication flow using the extensions field, how can I appropriately reflect this in iOS? For instance, how can I utilize extensions like credProps?
pubKeyCredParams
Regarding pubKeyCredParams, which is a list of supported public key algorithms, I am unsure how to use it to select an appropriate algorithm in iOS. How should I incorporate this information into the request?
excludeCredentials
I understand that setting excludeCredentials can prevent duplicate registration, but I am not sure how to use past credential information to set it effectively. Any advice on this would be appreciated.
**Current Code**
Currently, I have implemented the following code, but I am struggling to understand how to add and configure the fields mentioned above.
let publicKeyCredentialProvider = ASAuthorizationPlatformPublicKeyCredentialProvider(
    relyingPartyIdentifier: "www.example.com"
)
let registrationRequest = publicKeyCredentialProvider.createCredentialRegistrationRequest(
    challenge: challenge,
    name: userId,
    userID: userIdData
)
let authController = ASAuthorizationController(authorizationRequests: [registrationRequest])
authController.delegate = self
authController.presentationContextProvider = self
authController.performRequests()
In addition to the above code, I would be grateful if anyone could advise on how to configure fields like rp, authenticatorSelection, attestation, extensions, and pubKeyCredParams as well. Furthermore, I would appreciate any insights into the benefits of setting each of these fields in iOS, and any security considerations to be aware of.
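From what I've pieced together from the headers so far, a few of these options seem to map onto properties of the registration request; a sketch of my current guess (unverified; excludeCredentialIds is my own placeholder for the IDs returned by the server, and excludedCredentials requires iOS 17.4+):

// userVerificationPreference <- authenticatorSelection.userVerification
registrationRequest.userVerificationPreference = .required

// attestationPreference <- attestation ("none", "direct", ...)
registrationRequest.attestationPreference = .direct

// excludedCredentials <- excludeCredentials (iOS 17.4+), built from
// the credential IDs in the options response (my placeholder below).
if #available(iOS 17.4, *) {
    let excludeCredentialIds: [Data] = [] // fill from options.excludeCredentials
    registrationRequest.excludedCredentials = excludeCredentialIds.map {
        ASAuthorizationPlatformPublicKeyCredentialDescriptor(credentialID: $0)
    }
}

Is this the intended mapping, and are rp.id (via relyingPartyIdentifier), timeout, extensions, and pubKeyCredParams simply handled by the system?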
If anyone has experience with this, your guidance would be extremely helpful. Thank you very much in advance!
I have a SwiftUI app that I've been working on in Xcode 16.1. The project builds and runs in the simulators, on my Mac, and on my iPhone/iPad without any issues. I'm also able to build my unit test project and run the tests without any errors. The project has zero warnings in it.
When I go to the Edit Scheme options and change the Run scheme to a Release build with the Debug Executable option unchecked, I get a compiler error:
Command SwiftCompile failed with a nonzero exit code
I've attempted this Release run with the following target devices in Xcode:
My iPhone 15 Pro Max (iOS 18.2 Beta 3)
MacBook Air (M1) (15.2 Beta)
iPhone 16 Simulator (iOS 18.1)
Any iOS Simulator Device (arm64, x86_64)
All of these targets have the same issue. Normally I would just debug the error from the logs, but when I look at the build output, I can't see any information telling me what happened. It looks like the source files are sent to the Swift compiler and the compiler fails without bubbling up the issue.
I've provided the full error log export as a Gist HERE due to its size. Is there anything in the log I'm missing? Is there a way for me to turn on more verbose logging during compilation of a Release build?
I created a brand-new Multiplatform App in Xcode and added all of my source files to it. No project configuration settings were changed. I could build it successfully with the Debug configuration. I then changed to the Release configuration and experienced the same error. I can create another fresh project, use the same Release configuration with none of my source files in it, and get a successful build.
It seems there is something wrong with my source files under the Release configuration, but the compiler doesn't indicate what. I'm lost at this point, as I can't figure out how to get a release build and can't find any indication as to why.
Since I installed a beta update recently, I have had issues with my phone downloading contacts to my Suzuki Swift.
Today I noticed that the Suzuki Connect app would not open, stating that my 2-month-old iPhone 16 had been jailbroken, whatever that might mean.
I have only ever installed apps from the App Store and updates notified by Apple, so why is just this one app telling me my phone has been jailbroken?
I contacted Suzuki and they have no idea what the problem might be so I’m hoping someone in the community might be able to help me get everything fixed or be able to tell me more about my issue.
Hi, would anyone be so kind as to guide me on which technologies, kits, APIs, approaches, etc. are useful for creating a horizontal window with a map (preferably MapKit) on visionOS using SwiftUI?
I was hoping to achieve this scenario: the user can walk around and interact with a horizontal map window, and also interact with (3D) pins on the map. A similar thing was done by SAP in their "SAP Analytics Cloud" app (second image from top).
Since I am a complete beginner in this area, I was looking for a clean, simple solution. I need to know whether AR/RealityKit is necessary, or whether this is achievable using only native SwiftUI. I tried using just Map() with .rotation3DEffect(), which actually makes the map horizontal, but gestures on the map are out of sync, and I really don't know if this approach is valid or complete rubbish.
Any feedback appreciated.
Hello! Currently watching the "Envision the Future: Build great apps for visionOS" webinar, and lots of questions are coming up. Thx for offering this online!
For those of us with "VR legs", how can we go about setting up custom hand/finger gestures that would enable us to add the functionality for teleporting and navigating within our fully Immersive environments? Both smooth, and snap turn/teleport options would be great, thx! This is adjacent to my previous question on how to setup a PS5 controller to do something similar. Think Half-Life: ALYX as the gold standard for VR navigation.
I created 2 different schemas and made a small change to one of them: I added a property called "version" to the model. To see if the migration went through, I set up the migration plan to set version to "1.1.0" in willMigrate. In didMigrate, I looped through the new version of Tags to check if version was set, and if not, set it. I did this in case willMigrate didn't do what it was supposed to. The app built and ran successfully, but version was not set on the Tag I created in the app.
Here's the migration:
enum MigrationPlanV2: SchemaMigrationPlan {
    static var schemas: [any VersionedSchema.Type] {
        [DataSchemaV1.self, DataSchemaV2.self]
    }

    static let stage1 = MigrationStage.custom(
        fromVersion: DataSchemaV1.self,
        toVersion: DataSchemaV2.self,
        willMigrate: { context in
            let oldTags = try? context.fetch(FetchDescriptor<DataSchemaV1.Tag>())
            for old in oldTags ?? [] {
                let new = Tag(name: old.name, version: "Version 1.1.0")
                context.delete(old)
                context.insert(new)
            }
            try? context.save()
        },
        didMigrate: { context in
            let newTags = try? context.fetch(FetchDescriptor<DataSchemaV2.Tag>())
            for tag in newTags ?? [] {
                if tag.version == nil {
                    tag.version = "1.1.0"
                }
            }
        }
    )

    static var stages: [MigrationStage] {
        [stage1]
    }
}
Here's the model container:
var sharedModelContainer: ModelContainer = {
    let schema = Schema(versionedSchema: DataSchemaV2.self)
    let modelConfiguration = ModelConfiguration(schema: schema, isStoredInMemoryOnly: false)
    do {
        return try ModelContainer(
            for: schema,
            migrationPlan: MigrationPlanV2.self,
            configurations: [modelConfiguration]
        )
    } catch {
        fatalError("Could not create ModelContainer: \(error)")
    }
}()
I ran a similar test prior to this, and got the same result. It's like the code in my willMigrate isn't running. I also had print statements in there that I never saw printed to the console. I tried to check the CloudKit console for any information, but I'm having issues with that as well (separate post).
Anyways, how can I confirm that my migration was successful here?
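One thing I noticed while writing this up: my didMigrate mutates the new Tags but never saves the context. A sketch of the variant I plan to try next (I don't know yet whether this is the actual cause):

didMigrate: { context in
    let newTags = try? context.fetch(FetchDescriptor<DataSchemaV2.Tag>())
    for tag in newTags ?? [] where tag.version == nil {
        tag.version = "1.1.0"
    }
    // Persist the changes made in this stage; without an explicit save,
    // the mutations above might never reach the store.
    try? context.save()
}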
I previously got this error when I used a passkey to log in. I'm not using that. I've put my password in multiple times, closed the browser, etc. It doesn't seem to be working. The only thing I can think of is that because my Mac is using a different iCloud account, it's not letting me.
Does anyone have any ideas?
I recently completed a freelance project where I was tasked with creating room-scale environments that could be used as AR elements. As a bonus, I suggested that these could be done to scale and repurposed for eventual viewing in Vision Pro. To illustrate, I was able to quickly create a simple Immersive project in Xcode, add the USDZ file (authored in Maya, with baked lighting from Arnold) to Reality Composer Pro, and compile for quick sending to the headset. I would then do screen recordings inside the immersive space, which the client loved to see. However, I am unable to walk around due to the boundary limitations.
My next obvious thought is: how can I set up the "player" camera so that I can control it with a PS5 controller inside AVP? In addition to Maya, I'm an Unreal Engine artist and have been waiting patiently to get any projects compiled for AVP. With the 5.5 release, I was able to get a VR Template test over to AVP, where I have rudimentary navigation control via the PS5 controller.
Ideally, I'd also love to learn how to set this up natively, so I can take simple USDZ scenes created in Maya, import them into RCP, set up a simple camera controller, and then use this to navigate my VR immersive spaces on Vision Pro. How can we go about doing this?
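For the native route, the furthest I've gotten is reading the controller through the GameController framework and applying the stick input to the scene myself; a rough sketch (StickLocomotion and the world-root trick are my own ideas, since visionOS doesn't expose a player camera):

import GameController
import RealityKit

// Sketch: read DualSense thumbstick input and translate the scene's
// root entity in the opposite direction to simulate walking.
final class StickLocomotion {
    var worldRoot: Entity // root of the immersive scene

    init(worldRoot: Entity) {
        self.worldRoot = worldRoot
        NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main
        ) { [weak self] note in
            guard let controller = note.object as? GCController else { return }
            self?.bind(controller)
        }
    }

    private func bind(_ controller: GCController) {
        controller.extendedGamepad?.leftThumbstick.valueChangedHandler = { [weak self] _, x, y in
            guard let self else { return }
            let speed: Float = 0.05
            // Move the world opposite to the stick so the user appears to move.
            self.worldRoot.position -= SIMD3<Float>(x, 0, -y) * speed
        }
    }
}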
Part two of this question/suggestion is, how would I go about controlling a rigged, animated character in AR/passthrough mode in a similar fashion? Thx!
I was able to confirm with a customer of mine that calling copyfile with a source file that is a symbolic link on an NTFS partition always causes the error:
NSPOSIXErrorDomain 12 Cannot allocate memory
They use NTFS drivers from Paragon.
They tried copying a symbolic link from NTFS to both APFS and NTFS, with the same result. Is this an issue with macOS, or with the NTFS driver?
Copying regular files, on the other hand, always works. Copying manually from the Finder also seems to always work, with both regular files and symbolic links, so I'm wondering how the Finder does it.
Here is the sample app they used to reproduce the issue. The first open panel allows selecting the source directory, and the second one the destination directory. The variable filename holds the name of the symbolic link to be copied from the source to the destination. Apparently it's not possible to select a symbolic link directly in NSOpenPanel, as it always resolves to the linked file.
@main
class AppDelegate: NSObject, NSApplicationDelegate {
    func applicationDidFinishLaunching(_ notification: Notification) {
        let openPanel = NSOpenPanel()
        openPanel.canChooseDirectories = true
        openPanel.canChooseFiles = false
        openPanel.runModal()
        let filename = "Modules"
        let source = openPanel.urls[0].appendingPathComponent(filename)
        openPanel.runModal()
        let destination = openPanel.urls[0].appendingPathComponent(filename)
        do {
            let state = copyfile_state_alloc()
            defer {
                copyfile_state_free(state)
            }
            var bsize = UInt32(16_777_216)
            if copyfile_state_set(state, UInt32(COPYFILE_STATE_BSIZE), &bsize) != 0 {
                throw NSError(domain: NSPOSIXErrorDomain, code: Int(errno))
            }
            if copyfile_state_set(state, UInt32(COPYFILE_STATE_STATUS_CB), unsafeBitCast(copyfileCallback, to: UnsafeRawPointer.self)) != 0
                || copyfile_state_set(state, UInt32(COPYFILE_STATE_STATUS_CTX), unsafeBitCast(self, to: UnsafeRawPointer.self)) != 0
                || copyfile(source.path, destination.path, state, copyfile_flags_t(COPYFILE_NOFOLLOW)) != 0 {
                throw NSError(domain: NSPOSIXErrorDomain, code: Int(errno))
            }
        } catch {
            let error = error as NSError
            let alert = NSAlert()
            alert.messageText = "\(error.localizedDescription)\n\(error.domain) \(error.code)"
            alert.runModal()
        }
    }

    private let copyfileCallback: copyfile_callback_t = { what, stage, state, src, dst, ctx in
        if what == COPYFILE_COPY_DATA {
            if stage == COPYFILE_ERR {
                return COPYFILE_QUIT
            }
            var size: off_t = 0
            copyfile_state_get(state, UInt32(COPYFILE_STATE_COPIED), &size)
        }
        return COPYFILE_CONTINUE
    }
}
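For comparison, I'm also going to ask them to run the same copy through FileManager, which as far as I know copies a symbolic link as a link rather than following it; a minimal sketch:

import Foundation

// Sketch: copy the same symbolic link via FileManager to see whether
// the failure is specific to copyfile() or to the NTFS driver itself.
func copyViaFileManager(source: URL, destination: URL) {
    do {
        try FileManager.default.copyItem(at: source, to: destination)
        print("FileManager copy succeeded")
    } catch let error as NSError {
        print("FileManager copy failed: \(error.domain) \(error.code)")
    }
}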
Hi all,
I'm using the Certificates API to create a development certificate. I want to create a Jenkins job that will give employees a way to create a certificate without giving them admin rights.
I’m creating a new certificate without any issues.
When I try to create another certificate with a different CSR (for a different user), I get an error that a certificate already exists.
Is it limited to creating only one certificate per API key?
Thanks!
Hello,
I am currently working on an app that features multiple environments in which I combine Reality Composer Pro scenes with objects managed at runtime, and make heavy use of RealityView attachments that modify the appearance of certain objects. Is it possible to keep track of an AR anchor when transitioning between immersive spaces?
About my app:
There are two main contexts/scenes in the app that the user progresses through. The first takes place in AR; it is non-interactive and driven by a timeline animation. The second is in VR and allows the user to change the materials of select models. Both scenes need to be placed relative to a real-life object that functions as an image anchor. Anchoring is necessary for visual purposes in the AR context, and it would be nice to use it in the VR context as well, in order to provide passive haptics to the user.
If the user doesn't have access to the physical object, we make use of plane-based anchoring. Either way, we would like to keep the anchor's position across the scenes.
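The direction I'm currently investigating is persisting the anchor through ARKit's WorldTrackingProvider, since WorldAnchors should exist independently of any particular immersive space; a sketch of what I mean (untested across space transitions):

import ARKit

// Sketch: create a WorldAnchor at the image/plane anchor's transform,
// then re-discover it after switching immersive spaces via anchorUpdates.
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func persistAnchor(at originFromAnchor: simd_float4x4) async throws {
    try await session.run([worldTracking])
    let anchor = WorldAnchor(originFromAnchorTransform: originFromAnchor)
    try await worldTracking.addAnchor(anchor)
    // Store anchor.id somewhere; in the next space, iterate
    // worldTracking.anchorUpdates and re-place content when an anchor
    // with the same id shows up.
}

Would WorldAnchors added this way remain valid when I dismiss one immersive space and open another?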
Hello - asking a question that's probably been asked, and maybe a couple more.
I bought a membership yesterday and received the email, so I'm assuming it's approved, but when I log in to my developer account it says "PENDING".
I will be the only user on my account. I already have the app; it's been tested by another party, and all I need to do is make some minor changes. So that's done. Now I need to create a debug or release testing distribution, but when I do that, I get this error in Xcode. Is this normal until my status changes from "PENDING" to (guessing) "APPROVED"?
And one further question: I am the only developer, and this app will only be used by one person, as it's specialized for a single purpose. What are the options for that person to install the app? Is there a special section on the App Store for such an app?
Thanks,
Gary