How to utilize each field of WebAuthn Options for implementation on iOS?
Hello, I am currently working on implementing credential registration for biometric authentication using WebAuthn in an iOS app. I am using ASAuthorizationPlatformPublicKeyCredentialProvider to create a credential registration request based on the data retrieved from the WebAuthn options endpoint. At the moment, I am only using user.id, user.name, and challenge from the options response, and I am unsure how to use the other fields effectively. I would greatly appreciate advice on how to use the following fields.

**Fields I would like to use:**

- **rp (Relying Party):** I am retrieving id and name, but I am not sure how best to pass and use these fields. Is there an explicit way to use them?
- **authenticatorSelection:** How can I set requireResidentKey and userVerification on ASAuthorizationPlatformPublicKeyCredentialRegistrationRequest? Also, what are the specific benefits of using these fields?
- **timeout:** Is there a way to reflect the timeout value in the credential registration request, and what would be the best way to handle this information on iOS?
- **attestation:** The attestation field can contain values such as none or direct. How should I reflect this in the credential registration request on iOS? I would appreciate a sample implementation or guidance on the benefits of setting this field.
- **extensions:** If I want to customize the authentication flow using the extensions field, how can I appropriately reflect this on iOS? For instance, how can I use extensions like credProps?
- **pubKeyCredParams:** This is a list of supported public key algorithms, and I am unsure how to use it to select an appropriate algorithm on iOS. How should I incorporate this information into the request?
- **excludeCredentials:** I understand that setting excludeCredentials can prevent duplicate registration, but I am not sure how to use past credential information to set it effectively. Any advice on this would be appreciated.

**Current Code**

Currently, I have implemented the following code, but I am struggling to understand how to add and configure the fields mentioned above.

```swift
let publicKeyCredentialProvider = ASAuthorizationPlatformPublicKeyCredentialProvider(
    relyingPartyIdentifier: "www.example.com"
)

let registrationRequest = publicKeyCredentialProvider.createCredentialRegistrationRequest(
    challenge: challenge,
    name: userId,
    userID: userIdData
)

let authController = ASAuthorizationController(authorizationRequests: [registrationRequest])
authController.delegate = self
authController.presentationContextProvider = self
authController.performRequests()
```

In addition to the above code, I would be grateful if anyone could advise on how to configure fields like rp, authenticatorSelection, attestation, extensions, and pubKeyCredParams as well. Furthermore, I would appreciate any insights into the benefits of setting each of these fields on iOS, and any security considerations to be aware of. If anyone has experience with this, your guidance would be extremely helpful. Thank you very much in advance!
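Not a definitive answer, but here is a minimal sketch of how I believe some of these options map onto the AuthenticationServices request, assuming the server values have already been decoded (base64url to Data). The excludedCredentials property and its iOS 17.4 availability are from memory and should be treated as assumptions; as far as I know, rp.name, timeout, pubKeyCredParams, and most extensions have no direct counterpart on the platform request (the platform authenticator only offers ES256), so those are handled server-side or simply not passed.

```swift
import AuthenticationServices
import Foundation

// Hypothetical helper: the parameter names mirror the WebAuthn options fields and are
// assumed to be already decoded (base64url -> Data) by your networking layer.
func makeRegistrationRequest(
    rpID: String,
    userName: String,
    userID: Data,
    challenge: Data,
    userVerification: String?,        // authenticatorSelection.userVerification
    attestation: String?,             // "none", "direct", ...
    excludeCredentialIDs: [Data]      // excludeCredentials[].id
) -> ASAuthorizationPlatformPublicKeyCredentialRegistrationRequest {
    // rp.id is what you pass as the relying party identifier.
    let provider = ASAuthorizationPlatformPublicKeyCredentialProvider(
        relyingPartyIdentifier: rpID
    )

    let request = provider.createCredentialRegistrationRequest(
        challenge: challenge,
        name: userName,
        userID: userID
    )

    // authenticatorSelection.userVerification -> userVerificationPreference
    switch userVerification {
    case "required":    request.userVerificationPreference = .required
    case "discouraged": request.userVerificationPreference = .discouraged
    default:            request.userVerificationPreference = .preferred
    }

    // attestation -> attestationPreference (the platform authenticator may still return none)
    request.attestationPreference = (attestation == "direct")
        ? ASAuthorizationPublicKeyCredentialAttestationKind.direct
        : ASAuthorizationPublicKeyCredentialAttestationKind.none

    // excludeCredentials -> excludedCredentials; the availability version is my assumption.
    if #available(iOS 17.4, *) {
        request.excludedCredentials = excludeCredentialIDs.map {
            ASAuthorizationPlatformPublicKeyCredentialDescriptor(credentialID: $0)
        }
    }

    return request
}
```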
Replies: 0 · Boosts: 0 · Views: 49 · Activity: 10h
SwiftUI: Command SwiftCompile failed with a nonzero exit code
I have a SwiftUI app that I've been working on in Xcode 16.1. The project builds and runs in the simulators, on my Mac, and on my iPhone/iPad without any issues. I'm also able to build my unit test project and run the tests without any errors. The project has zero warnings in it.

When I go to the Edit Scheme options and change the Run scheme to a Release build with Debug Executable unchecked, I get a compiler error:

Command SwiftCompile failed with a nonzero exit code

I've attempted this Release run with the following target devices in Xcode:

- My iPhone 15 Pro Max (iOS 18.2 Beta 3)
- MacBook Air (M1) (15.2 Beta)
- iPhone 16 Simulator (iOS 18.1)
- Any iOS Simulator Device (arm64, x86_64)

All of these targets have the same issue. Normally I would just debug the error from the logs, but when I look at the build output I can't see any information in the log to tell me what happened. It looks like the source files are sent to the Swift compiler and the compiler fails without bubbling up the issue. I've provided the full error log export as a Gist HERE due to its size. Is there anything in the log I'm missing? Is there a way for me to turn on more verbose logging during compilation of a Release build?

I created a brand-new Multiplatform App in Xcode and added all of my source files to it. No project configuration settings were changed. I could build it successfully with the Debug configuration. I then changed it to the Release configuration and experienced the same error. I can create another fresh project, make the same Release configuration with none of my source files in it, and get a successful build. It seems there is something wrong with my source files and the Release configuration, but the compiler doesn't indicate what. I'm lost at this point, as I can't figure out how to get a release build and can't find any indication as to why.
Replies: 1 · Boosts: 0 · Views: 47 · Activity: 11h
Problems with Car Software Since updating
Since I installed a beta update recently, I have had issues with my phone downloading contacts to my Suzuki Swift. Today I noticed that the Suzuki Connect app would not open, stating that my 2-month-old iPhone 16 had been jailbroken, whatever that might mean. I have only ever installed apps from the App Store and updates notified by Apple, so why is just this one app telling me my phone has been jailbroken? I contacted Suzuki and they have no idea what the problem might be, so I'm hoping someone in the community might be able to help me get everything fixed or be able to tell me more about my issue.
Replies: 0 · Boosts: 0 · Views: 62 · Activity: 12h
Horizontal window/map on visionOS
Hi, would anyone be so kind as to guide me on which technologies, kits, APIs, approaches, etc. are useful for creating a horizontal window with a map (preferably MapKit) on visionOS using SwiftUI? I was hoping to achieve this scenario: the user can walk around and interact with a horizontal map window, and also interact with (3D) pins on the map. A similar thing was done by SAP in their "SAP Analytics Cloud" app (second image from top). Since I am a complete beginner in this area, I was looking for a clean, simple solution. I need to know whether AR/RealityKit is necessary, or whether this is achievable using only native SwiftUI. I tried using just Map() with .rotation3DEffect(), which actually makes the map horizontal, but gestures on the map are out of sync, and I really don't know if this approach is valid or complete rubbish. Any feedback appreciated.
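Purely as a sketch of the rotation3DEffect route mentioned above: the window style, sizes, sample coordinate, and rotation anchor below are my own placeholder choices, and whether gesture hit-testing follows the rotation is exactly the open question.

```swift
import SwiftUI
import MapKit

@main
struct HorizontalMapApp: App {
    var body: some Scene {
        // A volumetric window so the flattened map can sit like a tabletop.
        WindowGroup(id: "map") {
            HorizontalMapView()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 1.0, height: 0.6, depth: 1.0, in: .meters)
    }
}

struct HorizontalMapView: View {
    @State private var camera: MapCameraPosition = .automatic

    var body: some View {
        Map(position: $camera) {
            // Hypothetical pin; replace with your own annotations / 3D content.
            Marker("Sample pin", coordinate: .init(latitude: 48.8584, longitude: 2.2945))
        }
        // Lay the 2D map flat; gesture hit-testing may not follow the rotation,
        // which matches the desync described in the post.
        .rotation3DEffect(.degrees(-90), axis: (x: 1, y: 0, z: 0), anchor: .center)
        .frame(width: 800, height: 800)
    }
}
```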
Replies: 0 · Boosts: 0 · Views: 52 · Activity: 12h
Program finger/hand gesture for snap turn/teleport navigation
Hello! I'm currently watching the "Envision the Future: Build great apps for visionOS" webinar, and lots of questions are coming up. Thx for offering this online! For those of us with "VR legs", how can we go about setting up custom hand/finger gestures that would enable us to add the functionality for teleporting and navigating within our fully immersive environments? Both smooth and snap turn/teleport options would be great, thx! This is adjacent to my previous question on how to set up a PS5 controller to do something similar. Think Half-Life: ALYX as the gold standard for VR navigation.
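As a sketch of one possible direction (not from the webinar): ARKit's hand tracking on visionOS exposes joint transforms, so a custom "pinch" or similar gesture can be detected manually and mapped to a snap-turn/teleport action. The 2 cm threshold and the onPinch callback are arbitrary placeholders, and this assumes the app is in an ImmersiveSpace with hand-tracking authorization granted.

```swift
import ARKit
import simd

/// Minimal pinch detector built on visionOS hand tracking.
/// The threshold value and the `onPinch` action are placeholders.
final class PinchTeleportDetector {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()
    var onPinch: (() -> Void)?          // e.g. trigger a snap turn or teleport

    func start() async throws {
        try await session.run([handTracking])

        for await update in handTracking.anchorUpdates {
            let anchor = update.anchor
            guard anchor.isTracked,
                  let skeleton = anchor.handSkeleton else { continue }

            // World-space positions of thumb tip and index finger tip.
            let thumb = position(of: skeleton.joint(.thumbTip), in: anchor)
            let index = position(of: skeleton.joint(.indexFingerTip), in: anchor)

            // Arbitrary 2 cm threshold for "pinch" - tune to taste.
            if simd_distance(thumb, index) < 0.02 {
                onPinch?()
            }
        }
    }

    private func position(of joint: HandSkeleton.Joint, in anchor: HandAnchor) -> SIMD3<Float> {
        let world = anchor.originFromAnchorTransform * joint.anchorFromJointTransform
        return SIMD3(world.columns.3.x, world.columns.3.y, world.columns.3.z)
    }
}
```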
Replies: 0 · Boosts: 0 · Views: 54 · Activity: 12h
SwiftData: How can I tell if a migration went through successfully?
I created 2 different schemas and made a small change to one of them: I added a property to the model called "version". To see if the migration went through, I set up the migration plan to set version to "1.1.0" in willMigrate. In didMigrate, I looped through the new version of Tags to check if version was set, and if not, set it. I did this in case willMigrate didn't do what it was supposed to. The app built and ran successfully, but version was not set on the Tag I created in the app.

Here's the migration:

```swift
enum MigrationPlanV2: SchemaMigrationPlan {
    static var schemas: [any VersionedSchema.Type] {
        [DataSchemaV1.self, DataSchemaV2.self]
    }

    static let stage1 = MigrationStage.custom(
        fromVersion: DataSchemaV1.self,
        toVersion: DataSchemaV2.self,
        willMigrate: { context in
            let oldTags = try? context.fetch(FetchDescriptor<DataSchemaV1.Tag>())
            for old in oldTags ?? [] {
                let new = Tag(name: old.name, version: "Version 1.1.0")
                context.delete(old)
                context.insert(new)
            }
            try? context.save()
        },
        didMigrate: { context in
            let newTags = try? context.fetch(FetchDescriptor<DataSchemaV2.Tag>())
            for tag in newTags ?? [] {
                if tag.version == nil {
                    tag.version = "1.1.0"
                }
            }
        }
    )

    static var stages: [MigrationStage] {
        [stage1]
    }
}
```

Here's the model container:

```swift
var sharedModelContainer: ModelContainer = {
    let schema = Schema(versionedSchema: DataSchemaV2.self)
    let modelConfiguration = ModelConfiguration(schema: schema, isStoredInMemoryOnly: false)
    do {
        return try ModelContainer(
            for: schema,
            migrationPlan: MigrationPlanV2.self,
            configurations: [modelConfiguration])
    } catch {
        fatalError("Could not create ModelContainer: \(error)")
    }
}()
```

I ran a similar test prior to this and got the same result. It's like the code in my willMigrate isn't running. I also had print statements in there that I never saw printed to the console. I tried to check the CloudKit console for any information, but I'm having issues with that as well (separate post). Anyway, how can I confirm that my migration was successful here?
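Two details in the stage above stand out to me, offered only as hedged observations rather than a confirmed fix: the unqualified `Tag` in willMigrate could resolve to a different schema version than intended, and didMigrate never saves the context, so its fallback values may not persist. A variant of the stage with both made explicit (types and names as in the post; whether either is the actual cause is an assumption):

```swift
// Variant of stage1 with a fully qualified V2 type and an explicit save in didMigrate.
static let stage1Checked = MigrationStage.custom(
    fromVersion: DataSchemaV1.self,
    toVersion: DataSchemaV2.self,
    willMigrate: { context in
        let oldTags = try? context.fetch(FetchDescriptor<DataSchemaV1.Tag>())
        for old in oldTags ?? [] {
            // Fully qualify the new type so the V2 model is unambiguous.
            let new = DataSchemaV2.Tag(name: old.name, version: "Version 1.1.0")
            context.delete(old)
            context.insert(new)
        }
        try? context.save()
    },
    didMigrate: { context in
        let newTags = try? context.fetch(FetchDescriptor<DataSchemaV2.Tag>())
        for tag in newTags ?? [] where tag.version == nil {
            tag.version = "1.1.0"
        }
        try? context.save()   // assumption: changes made here are not saved automatically
    }
)
```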
Replies: 0 · Boosts: 0 · Views: 68 · Activity: 12h
Control Player Camera with PS5 Controller on Vision Pro
I recently completed a freelance project where I was tasked with creating room-scale environments that could be used as AR elements. As a bonus, I suggested that these could be done to scale and repurposed for eventual viewing in Vision Pro. To illustrate, I was able to quickly create a simple Immersive project in Xcode, add the USDZ file (authored in Maya, with baked lighting from Arnold) to Reality Composer Pro, and compile for quick sending to the headset. I would then do screen recordings inside the immersive space, which the client loved to see. However, I am unable to walk around due to the boundary limitations.

My next obvious thought is: how can I set up the "player" camera so that I can control it with a PS5 controller inside AVP? In addition to Maya, I'm an Unreal Engine artist and have been waiting patiently to get any projects compiled for AVP. With the 5.5 release, I was able to get a VR Template test over to AVP, where I have rudimentary navigation control via the PS5 controller.

Ideally, I'd also love to learn how to set this up natively, so I can take simple USDZ scenes created in Maya, import them into RCP, set up a simple camera controller, and then use this to navigate my VR Immersive spaces on Vision Pro. How can we go about doing this?

Part two of this question/suggestion is: how would I go about controlling a rigged, animated character in AR/passthrough mode in a similar fashion? Thx!
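As a sketch of what the native route might look like (not a tested camera rig): the GameController framework delivers DualSense thumbstick values, which could drive the transform of whatever entity you treat as the world root, since there is no directly controllable camera in an immersive space. The entity setup, speed, and per-frame update hook below are assumptions.

```swift
import GameController
import RealityKit

/// Feeds left-thumbstick input from a connected controller into an entity's position.
/// `worldRoot` is a placeholder for whatever entity you move to simulate locomotion.
final class ControllerLocomotion {
    private let worldRoot: Entity
    private var moveInput = SIMD2<Float>.zero
    private var connectToken: NSObjectProtocol?

    init(worldRoot: Entity) {
        self.worldRoot = worldRoot

        // Bind controllers that are already connected, and any that connect later.
        GCController.controllers().forEach(bind)
        connectToken = NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main
        ) { [weak self] note in
            guard let controller = note.object as? GCController else { return }
            self?.bind(controller)
        }
    }

    private func bind(_ controller: GCController) {
        controller.extendedGamepad?.leftThumbstick.valueChangedHandler = { [weak self] _, x, y in
            self?.moveInput = SIMD2(x, y)
        }
    }

    /// Call once per frame (e.g. from a RealityKit scene update subscription).
    func update(deltaTime: Float, speed: Float = 1.5) {
        // Moving the world opposite to the stick reads as the player moving forward.
        let delta = SIMD3<Float>(-moveInput.x, 0, moveInput.y) * speed * deltaTime
        worldRoot.position += delta
    }
}
```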
Replies: 0 · Boosts: 0 · Views: 56 · Activity: 13h
copyfile causes NSPOSIXErrorDomain 12 "Cannot allocate memory" when copying symbolic link from NTFS partition
I was able to confirm with a customer of mine that calling copyfile with a source file that is a symbolic link on an NTFS partition always causes the error

NSPOSIXErrorDomain 12 Cannot allocate memory

They use NTFS drivers from Paragon. They tried copying a symbolic link from NTFS to both APFS and NTFS with the same result. Is this an issue with macOS, or with the NTFS driver? Copying regular files, on the other hand, always works. Copying manually from the Finder also seems to always work, both with regular files and symbolic links, so I'm wondering how the Finder does it.

Here is the sample app that they used to reproduce the issue. The first open panel allows selecting the source directory and the second one the destination directory. The variable filename holds the name of the symbolic link to be copied from the source to the destination. Apparently it's not possible to select a symbolic link directly in NSOpenPanel, as it always resolves to the linked file.

```swift
@main
class AppDelegate: NSObject, NSApplicationDelegate {

    func applicationDidFinishLaunching(_ notification: Notification) {
        let openPanel = NSOpenPanel()
        openPanel.canChooseDirectories = true
        openPanel.canChooseFiles = false
        openPanel.runModal()
        let filename = "Modules"
        let source = openPanel.urls[0].appendingPathComponent(filename)
        openPanel.runModal()
        let destination = openPanel.urls[0].appendingPathComponent(filename)
        do {
            let state = copyfile_state_alloc()
            defer { copyfile_state_free(state) }
            var bsize = UInt32(16_777_216)
            if copyfile_state_set(state, UInt32(COPYFILE_STATE_BSIZE), &bsize) != 0 {
                throw NSError(domain: NSPOSIXErrorDomain, code: Int(errno))
            }
            if copyfile_state_set(state, UInt32(COPYFILE_STATE_STATUS_CB), unsafeBitCast(copyfileCallback, to: UnsafeRawPointer.self)) != 0 ||
                copyfile_state_set(state, UInt32(COPYFILE_STATE_STATUS_CTX), unsafeBitCast(self, to: UnsafeRawPointer.self)) != 0 ||
                copyfile(source.path, destination.path, state, copyfile_flags_t(COPYFILE_NOFOLLOW)) != 0 {
                throw NSError(domain: NSPOSIXErrorDomain, code: Int(errno))
            }
        } catch {
            let error = error as NSError
            let alert = NSAlert()
            alert.messageText = "\(error.localizedDescription)\n\(error.domain) \(error.code)"
            alert.runModal()
        }
    }

    private let copyfileCallback: copyfile_callback_t = { what, stage, state, src, dst, ctx in
        if what == COPYFILE_COPY_DATA {
            if stage == COPYFILE_ERR {
                return COPYFILE_QUIT
            }
            var size: off_t = 0
            copyfile_state_get(state, UInt32(COPYFILE_STATE_COPIED), &size)
        }
        return COPYFILE_CONTINUE
    }
}
```
Replies: 0 · Boosts: 0 · Views: 72 · Activity: 14h
App Store Connect Certificates API
Hi all, I'm using the Certificates API in order to create a development certificate. I want to create a Jenkins job that will give employees an option to create a certificate without giving them admin rights. I'm creating a new certificate without any issues, but when I try to create another certificate with a different CSR (for a different user), I get an error that a certificate already exists. Is creation limited to one certificate per API key? Thanks!
Replies: 0 · Boosts: 0 · Views: 73 · Activity: 14h
AR anchor shared across multiple immersive scenes
Hello, I am currently working on an app that features multiple environments in which I combine Reality Composer Pro scenes with objects managed at runtime, as well as make heavy use of RealityView attachments that modify the appearance of certain objects. Is it possible to keep track of an AR anchor when transitioning between immersive spaces?

About my app: there are two main contexts/scenes in the app that the user progresses through. The first takes place in AR and is non-interactive and driven by a timeline animation. The second is in VR and allows the user to change materials of select models. Both scenes need to be placed relative to a real-life object that functions as an image anchor. Anchoring is necessary for visual purposes in the AR context, and it would be nice to use it in the VR context as well in order to provide passive haptics to the user. If the user doesn't have access to the physical object, we make use of plane-based anchoring. Either way, we would like to keep the anchor's position across the scenes.
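For what it's worth, a minimal sketch of one way this is sometimes approached on visionOS: derive a WorldAnchor from the image (or plane) anchor's transform, add it to a WorldTrackingProvider, and re-read it after switching spaces. Whether world anchors survive the specific transition between your two immersive spaces is exactly the open question; the code below is an assumption-laden sketch, not a confirmed pattern.

```swift
import ARKit
import simd

/// Persists a pose as a WorldAnchor so it can be looked up again later.
/// Assumes the hand-off happens within the same ARKitSession / app run.
final class SharedAnchorStore {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()
    private(set) var sharedAnchorID: UUID?

    func start() async throws {
        try await session.run([worldTracking])
    }

    /// Call from the AR scene once the image/plane anchor is tracked.
    func store(originFromAnchor: simd_float4x4) async throws {
        let anchor = WorldAnchor(originFromAnchorTransform: originFromAnchor)
        try await worldTracking.addAnchor(anchor)
        sharedAnchorID = anchor.id
    }

    /// Call from the VR scene to recover the stored pose.
    func restore() async -> simd_float4x4? {
        for await update in worldTracking.anchorUpdates {
            if update.anchor.id == sharedAnchorID, update.anchor.isTracked {
                return update.anchor.originFromAnchorTransform
            }
        }
        return nil
    }
}
```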
Replies: 0 · Boosts: 0 · Views: 51 · Activity: 14h
Time to Have Membership Completed?
Hello - Asking a question that's probably been asked, and maybe a couple more. I bought a membership yesterday and received the email, so I'm assuming it's "approved", but when I log in to my developer account it says "PENDING". I will be the only user on my account. I already have the app; it's been tested by another party and all I need to do is make some minor changes. So that's done. Now I need to create a "debug or release testing distribution", but when I do that I get this error in Xcode. Is this normal until my status changes from "PENDING" to (guessing) APPROVED?

One further question: I am the only developer, and this app will only be used by one person, as it's specialized for a single purpose. What are the options for that person to install the app? Is there a special section on the App Store for such an app? Thanks, Gary
Replies: 2 · Boosts: 0 · Views: 63 · Activity: 14h
sourceImageURL in imagePlaygroundSheet isn't optional
I can't shake the "I don't think I did this correctly" feeling about a change I'm making for Image Playground support. When you create an image via an Image Playground sheet, it returns a URL pointing to where the image is temporarily stored. Just like the Image Playground app, I want the user to be able to decide to edit that image further. The Image Playground sheet lets you pass in a source URL for an image to start with, which is perfect because I could pass in the URL of that temp image. But the URL is NOT optional. So what do I populate it with when the user is starting from scratch? A friendly AI told me to use URL(string: "")!, but that crashes when it gets force-unwrapped. URL(string: "about:blank")! seems to work, in that it is ignored (and doesn't crash) when I have the user create the initial image (which shouldn't have a source image). This feels super clunky to me. Am I overlooking something?
Replies: 0 · Boosts: 0 · Views: 54 · Activity: 15h
Family Controls Usage Data
Hi all, for context, the Family Controls entitlement request (for the Personal Device Management category/individual use case) includes the question:

"Will your app share device or usage data beyond the individual for the individual use case, or Family Sharing for the parent/guardian use case, including through means such as screenshots, screen recordings, or server logging?"

I'm looking for clarification on how to interpret this. I originally answered Yes and was rejected, then later answered No and was accepted. Ideally, I would like my screen time management app to allow users to opt in to social features. One simple example is opting into a leaderboard with your friends for who has the lowest screen time. If the user installed this app for themself and chooses to share this basic data with their friends, it sounds like an ethical and unproblematic feature, but I suppose storing that data would fall under "server logging"? If anyone has any experience with this, I would appreciate a more explicit description of the requirement above. Is what I described allowed? Thanks for reading!
Replies: 0 · Boosts: 0 · Views: 59 · Activity: 15h
Captured photos in wrong orientation
I'm building a custom camera screen that displays the camera image on a preview layer and then captures an image, using AVCaptureSession. When the picture is captured, I immediately load it into a UIImageView in order to display it to the user for approval. I've actually done this many times before, but this is the first time I've tried to do it in an app that supports interface rotation.

If I hold the phone in Portrait mode and capture a picture, everything works as expected. When the user rotates the phone into Landscape orientation, I detect this and I replace the preview layer (AVCaptureVideoPreviewLayer) with a new one, specifying connection.videoRotationAngle in order to make the image appear in the right orientation. I'm a little surprised that this is necessary, and it's not a smooth transition, but that doesn't matter.

What does matter is that when I capture the image, it is in the wrong orientation. I tried rotating it myself, but this doesn't seem to make any difference. What am I doing wrong?
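In case it helps frame the question, here is a minimal sketch of the rotation handling I believe AVFoundation expects on iOS 17+, using AVCaptureDevice.RotationCoordinator to drive both the preview connection and the photo output connection instead of swapping preview layers. The KVO wiring and where applyCaptureRotation() is called from are my own assumptions.

```swift
import AVFoundation

/// Keeps preview and captured-photo orientation in sync on iOS 17+,
/// using AVCaptureDevice.RotationCoordinator instead of rebuilding the preview layer.
final class CaptureRotationHandler {
    private let coordinator: AVCaptureDevice.RotationCoordinator
    private let photoOutput: AVCapturePhotoOutput
    private var observation: NSKeyValueObservation?

    init(device: AVCaptureDevice,
         previewLayer: AVCaptureVideoPreviewLayer,
         photoOutput: AVCapturePhotoOutput) {
        self.coordinator = AVCaptureDevice.RotationCoordinator(device: device,
                                                               previewLayer: previewLayer)
        self.photoOutput = photoOutput

        // Keep the preview upright as the interface rotates, without swapping layers.
        observation = coordinator.observe(\.videoRotationAngleForHorizonLevelPreview,
                                          options: [.initial, .new]) { coordinator, _ in
            previewLayer.connection?.videoRotationAngle =
                coordinator.videoRotationAngleForHorizonLevelPreview
        }
    }

    /// Call right before capturePhoto(with:delegate:) so the still is tagged with
    /// the current orientation.
    func applyCaptureRotation() {
        if let connection = photoOutput.connection(with: .video) {
            connection.videoRotationAngle =
                coordinator.videoRotationAngleForHorizonLevelCapture
        }
    }
}
```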
Replies: 0 · Boosts: 0 · Views: 54 · Activity: 15h
Increased and Mismatched Audio Buffer Sizes on iOS 18 when Sound Recognition or Vocal Shortcuts Is Enabled
Description

As of iOS 18, AVAudioSession.setPreferredIOBufferDuration ignores the requested buffer size when Sound Recognition or Vocal Shortcuts is enabled. This results in 1) much larger buffer sizes and 2) mismatched buffer sizes between input and output buffers, which causes 'glitchy' audio and increased latency. Additionally, when this issue occurs, AVAudioSession.setPreferredIOBufferDuration continues to return 'true' and no error is produced.

Steps to Reproduce:

1. Enable Vocal Shortcuts on a device running iOS 18. Enable at least one shortcut (e.g. Control Center).
2. Open or clone the example project (https://github.com/cwalo/SoundRecognitionBug)
3. Build and install the example project
4. Attach a headset and launch the application
5. Observe console logs showing
   - a requested buffer size of 0.005805 (256 samples @ 48k)
   - an actual buffer size of 0.023220 (1104 samples @ 48k - this is regularly the resulting buffer size in all of our tests)
6. Quit the app and detach the headset. Enable mutesOutput in AudioSystem.mm (to avoid feedback)
7. Launch the application
8. Observe
   - the same console output as in step 5
   - mismatched hardware buffer size of 1104 and recorded frame count of 1024
   - mismatched playbackCount and recordCount
9. Quit the app and disable Vocal Shortcuts
10. Launch the app
11. Observe IOBufferDuration matching the requested duration and matched buffer sizes (expected behavior)

Expected results:

- Requested IOBufferDuration is respected, or AVAudioSession returns false, or an error is produced
- Input and output buffer sizes match

Device(s): iPhone 11 Pro, iPad Pro
OS: iOS 18.0.1
Environment: Xcode 16.1
FB: FB15715421
Related to: https://forums.developer.apple.com/forums/thread/765477
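For anyone who wants a quicker check than the full example project, here is a minimal sketch of the request-versus-actual comparison described above. The 256-sample target at 48 kHz matches the report; the category and activation details are my own assumptions.

```swift
import AVFoundation

func checkIOBufferDuration() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, options: [.defaultToSpeaker])
        // 256 samples @ 48 kHz ≈ 0.005805 s, as in the report.
        try session.setPreferredIOBufferDuration(256.0 / 48_000.0)
        try session.setActive(true)
    } catch {
        print("AVAudioSession setup failed: \(error)")
    }

    // On iOS 18 with Sound Recognition / Vocal Shortcuts enabled, the reported
    // ioBufferDuration reportedly no longer matches the preferred value.
    print("preferred: \(session.preferredIOBufferDuration), actual: \(session.ioBufferDuration)")
}
```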
Replies: 0 · Boosts: 0 · Views: 67 · Activity: 15h
AVAudioUnitTimePitch: speeding up introduces artifacts
For an upcoming update of one of my apps, I'm facing an issue: the .rate parameter of an AVAudioUnitTimePitch allows me to slow down an audio track without any issues: setting .rate to 0.7 or 0.8 results in almost perfect playback without changing pitch. However, whenever the .rate parameter is greater than 1 (e.g. 1.1 or 1.15), I start to hear audio artifacts ("fluttering") in the audio output, which is not so nice (even at .overlap = 32). Intuitively, I'd have thought that speeding up the file should produce fewer artifacts than slowing it down? I've tried different sample rates (44.1 kHz and 48 kHz), but same result. Grateful for any input on this 🙏
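For context, a minimal sketch of the node graph I'd assume is in play here, in case someone wants to reproduce the artifact; the file URL, rate, and overlap values are placeholders, not the app's actual settings.

```swift
import AVFoundation

func playTimeStretched(url: URL, rate: Float = 1.15, overlap: Float = 32) throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let timePitch = AVAudioUnitTimePitch()
    timePitch.rate = rate          // >1 speeds up; this is where the artifacts appear
    timePitch.overlap = overlap    // higher overlap = more crossfaded analysis windows

    engine.attach(player)
    engine.attach(timePitch)

    let file = try AVAudioFile(forReading: url)
    engine.connect(player, to: timePitch, format: file.processingFormat)
    engine.connect(timePitch, to: engine.mainMixerNode, format: file.processingFormat)

    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()
}
```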
Replies: 0 · Boosts: 0 · Views: 47 · Activity: 15h
D3DMetal unsupported CheckFeatureSupport query 53 while running simple vulkaninfo using Mesa 24.3 Dozen (Vulkan on D3D12) driver
Hi, I wanted to test whether it is possible to use the Mesa3D Dozen driver (Vulkan on D3D12) together with D3DMetal 2b3 to get a potentially better Vulkan driver under Wine than the default MoltenVK. This would support Vulkan Windows apps via D3D12Metal.

I'm using vulkan_dzn.dll, dzn_icd.x86_64.json, and dxil.dll from the x64 folder of: https://github.com/pal1000/mesa-dist-win/releases/download/24.3.0-rc1/mesa3d-24.3.0-rc1-release-msvc.7z

Running the simple vulkaninfo app like:

wine64 vulkaninfo

I get the error:

[D3DMetal:LOG:2A825] Unsupported API: CheckFeatureSupport, unhandled support query 53

It also seems the D3DMetal Wine integration in Whisky doesn't expose d3d12core.dll and d3d12.dll like the new Agility D3D12 DLLs or VKD3D do, so I'm getting:

MESA: error: Failed to retrieve D3D12GetInterface
MESA: error: Failed to load DXCore

but it anyway seems to try to load the driver, as in:

WARNING: dzn is not a conformant Vulkan implementation, testing use only.

Full log:

```
MESA: error: Failed to retrieve D3D12GetInterface
MESA: error: Failed to load DXCore
WARNING: dzn is not a conformant Vulkan implementation, testing use only.
[D3DMetal:LOG:2A825] Unsupported API: CheckFeatureSupport, unhandled support query 53
00bc:fixme:dcomp:DCompositionCreateDevice 0000000000000000, {c37ea93a-e7aa-450d-b16f-9746cb0407f3}, 000000000011E328.
MESA: error: Failed to load DXCore
WARNING: dzn is not a conformant Vulkan implementation, testing use only.
[D3DMetal:LOG:2A825] Unsupported API: CheckFeatureSupport, unhandled support query 53
00bc:fixme:dcomp:DCompositionCreateDevice 0000000000000000, {c37ea93a-e7aa-450d-b16f-9746cb0407f3}, 000000000011E578.
ERROR: [Loader Message] Code 0 : setup_loader_term_phys_devs: Call to 'vkEnumeratePhysicalDevices' in ICD c:\windows\system32\.\vulkan_dzn.dll failed with error code -3
ERROR: [Loader Message] Code 0 : setup_loader_term_phys_devs: Failed to detect any valid GPUs in the current config
ERROR at C:\j\msdk0\build\Khronos-Tools\repo\vulkaninfo\vulkaninfo.h:241:vkEnumeratePhysicalDevices failed with ERROR_INITIALIZATION_FAILED
```
Replies: 0 · Boosts: 0 · Views: 56 · Activity: 15h
RoomCaptureSession persistence, ARSession pause broken?
Hi all, our app allows a user to scan a room and then save that scan in a separate view, followed by additional scans. We're looking into allowing room combining via CapturedStructure, so we need rooms to be scanned in the same ARWorldMap without necessarily needing to re-localize in the same session. This should fit within the first scenario that Apple described.

The only way I have found that meets our requirements is to save the RoomCaptureView and re-use that RoomCaptureView whenever we need to start a session again. This creates a number of other issues, and ideally we wouldn't need to save a View in something like a singleton. We are using captureSession.stop(pauseARSession: false).

Additionally, if we use the same RoomCaptureView and an error occurs during the scanning process, we can't get the instructions overlay to appear again if we reuse this view (specifically, the instructions in the middle of the view that state "Move device to start"). It's as if the instructions are completely removed and scanning is stuck in an error state if an error occurs. These instructions also seem to be separate from the instructions we can get from RoomCaptureViewDelegate via didProvide instruction: RoomCaptureSession.Instruction, so we can't use that either. There are a couple of subviews that seem relevant to this - RoomCaptureCoachingOverlayView and ARGlyphView - but both are not public, so we can't force them to appear. We also attempted a number of other things to try to get these subviews to appear, such as layoutIfNeeded().

Saving the ARSession and using it in let roomCaptureView = RoomCaptureView(frame: viewBounds, arSession: arSession), where we're creating a new view with the same ARSession, seems much more ideal, as that solves the above issues. But we run into another issue: world tracking seems to be completely lost when a new RoomCaptureView (and thus a new RoomCaptureSession) is started, even with the same already-started ARSession, almost as if captureSession.stop(pauseARSession: false) doesn't work as described.

Is there any way around needing to use the same RoomCaptureView or RoomCaptureSession for subsequent scans in the same session without needing to re-localize via ARWorldMap loading? Is there a way to force the guiding instructions to appear?
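For reference, a minimal sketch of the "shared ARSession, new RoomCaptureView per scan" pattern described above; whether world tracking actually survives this hand-off is exactly what's in question, and the initializer and stop flag are the ones named in the post.

```swift
import ARKit
import RoomPlan
import UIKit

/// Keeps one ARSession alive across multiple RoomCaptureView instances.
final class RoomScanCoordinator {
    let arSession = ARSession()
    private(set) var captureView: RoomCaptureView?

    /// Create a fresh RoomCaptureView bound to the shared ARSession and start scanning.
    func beginScan(in container: UIView) {
        let view = RoomCaptureView(frame: container.bounds, arSession: arSession)
        container.addSubview(view)
        captureView = view
        view.captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    /// Stop the current scan but keep the underlying ARSession (and its world map) running.
    func endScan() {
        captureView?.captureSession.stop(pauseARSession: false)
        captureView?.removeFromSuperview()
        captureView = nil
    }
}
```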
Replies: 0 · Boosts: 2 · Views: 52 · Activity: 16h