Posts

Post marked as solved
23 Replies
7.9k Views
I have seen this question come up a few times here on the Apple Developer Forums (recently noted here - https://developer.apple.com/forums/thread/655505), though I tend to find myself with a misunderstanding of what technology and steps are required to achieve the goal. In general, my colleague and I are trying to use Apple's Visualizing a Point Cloud Using Scene Depth - https://developer.apple.com/documentation/arkit/visualizing_a_point_cloud_using_scene_depth sample project from WWDC 2020 and save the rendered point cloud as a 3D model. I've seen this achieved (there are quite a few samples of the final exports available on popular 3D modeling websites), but remain unsure how to do so. From what I can ascertain, Model I/O seems like an ideal framework choice: create an empty MDLAsset and append an MDLObject for each point to end up with a model ready for export. How would one go about converting each "point" to an MDLObject to append to the MDLAsset? Or am I going down the wrong path?
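For illustration, a rough sketch of the direction I am considering, assuming the scanned positions have already been collected into a [SIMD3<Float>] array (the function and variable names below are placeholders, not from the sample project). Rather than appending one MDLObject per point, it packs every point into a single MDLMesh with a point-type submesh:

import ModelIO
import simd

func exportPointCloud(points: [SIMD3<Float>], to url: URL) throws {
    let allocator = MDLMeshBufferDataAllocator()

    // Pack the positions into a vertex buffer (16-byte stride for SIMD3<Float>).
    let vertexData = points.withUnsafeBytes { Data($0) }
    let vertexBuffer = allocator.newBuffer(with: vertexData, type: .vertex)

    // Describe a single float3 position attribute.
    let descriptor = MDLVertexDescriptor()
    descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                  format: .float3,
                                                  offset: 0,
                                                  bufferIndex: 0)
    descriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<SIMD3<Float>>.stride)

    // One submesh whose indices are simply 0..<count, drawn as points.
    let indices = (0..<UInt32(points.count)).map { $0 }
    let indexData = indices.withUnsafeBytes { Data($0) }
    let indexBuffer = allocator.newBuffer(with: indexData, type: .index)
    let submesh = MDLSubmesh(indexBuffer: indexBuffer,
                             indexCount: points.count,
                             indexType: .uInt32,
                             geometryType: .points,
                             material: nil)

    // One MDLMesh for the whole cloud, rather than one MDLObject per point.
    let mesh = MDLMesh(vertexBuffer: vertexBuffer,
                       vertexCount: points.count,
                       descriptor: descriptor,
                       submeshes: [submesh])

    let asset = MDLAsset(bufferAllocator: allocator)
    asset.add(mesh)
    try asset.export(to: url) // the format is inferred from the URL's file extension
}

The export step assumes Model I/O can write whatever extension the URL requests; I have not verified which point-capable formats it supports.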
Posted Last updated
.
Post not yet marked as solved
1 Replies
627 Views
Within my app, I have an Image that I am modifying with several modifiers to create an ideal appearance (code sample below). When taking this approach, I am finding that anything that is "underneath" the Image becomes unusable. In my case, I have a VStack with a Button and the Image. When the Image modifier of clipped() is applied, the Button becomes unusable (presumably because the Image is technically covering the button, but anything outside of the Image's frame is invisible). Is there a means of allowing an object below a clipped Image to still be functional/receive touches?

VStack {
    Button(action: {
        print("tapped!")
    }, label: {
        Text("Tap Here")
    })
    Image(uiImage: myImage)
        .resizable()
        .aspectRatio(contentMode: .fill)
        .frame(height: 150.0)
        .clipped()
}

I can confirm that if I change the aspectRatio to .fit, the issue does not appear (but, of course, my Image does not appear as I'd like it to). Subsequently, if I remove the .clipped() modifier, the issue is resolved (but, again, the Image then does not appear as I'd like it to).
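One workaround I have been experimenting with (a sketch, and I have not confirmed it is the intended approach) is to constrain the Image's hit testing to its visible frame with contentShape, or to opt the Image out of hit testing entirely with allowsHitTesting(false), since it handles no touches itself:

VStack {
    Button(action: {
        print("tapped!")
    }, label: {
        Text("Tap Here")
    })
    Image(uiImage: myImage)
        .resizable()
        .aspectRatio(contentMode: .fill)
        .frame(height: 150.0)
        .clipped()
        // Option 1: limit the image's tappable area to its clipped frame.
        .contentShape(Rectangle())
        // Option 2: the image handles no touches itself, so let them pass through.
        // .allowsHitTesting(false)
}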
Posted Last updated
.
Post not yet marked as solved
4 Replies
868 Views
As noted on the comparison page for Apple Watch - Series 6 - https://www.apple.com/watch/compare/, the U1 chip (Ultra Wideband) is a feature of the Apple Watch - Series 6. The WWDC 2020 session, Meet Nearby Interaction - https://developer.apple.com/videos/play/wwdc2020/10668/, does imply that this functionality exists on devices with a U1 chip, though the NearbyInteraction framework appears unavailable in watchOS. Can anyone confirm whether NearbyInteraction is available for watchOS?
Posted Last updated
.
Post not yet marked as solved
3 Replies
3.9k Views
I am attempting to set up a Text property that shows a "timer"-based countdown. My code is like so;

VStack {
    Text(Date().addingTimeInterval(600), style: .relative)
}

When I preview this code in a traditional SwiftUI view, the code appears as expected; in the middle of the canvas (as there are no vertical or horizontal spacers). Conversely, when I attempt to use the same code within a Widget, I find that the text is pushed all the way to the left side of the canvas, for no apparent reason. Due to this, I have no way of centering the text. My only success in centering the text has been to embed it in an HStack with multiple spacers;

HStack {
    Spacer()
    Spacer()
    Spacer()
    Spacer()
    Text(Date().addingTimeInterval(600), style: .relative)
}

Is there any particular reason this would be the case? I've not found any documentation indicating that the manner in which WidgetKit views render Text is any different from traditional SwiftUI views.
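A workaround I am currently trying (only a sketch; I have not confirmed this is the intended behavior) rests on the assumption that a Text using a date style expands to the available width in a Widget, so centering the text within its own full-width frame may help:

VStack {
    Text(Date().addingTimeInterval(600), style: .relative)
        // Take the full available width, then center the text within it.
        .frame(maxWidth: .infinity, alignment: .center)
        .multilineTextAlignment(.center)
}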
Posted Last updated
.
Post not yet marked as solved
1 Replies
573 Views
Many apps that I download from the App Store seem to be adding shortcuts to the Shortcuts app, without me ever setting up a voice command. I was under the impression that to add a shortcut to the Shortcuts app, a user would need to create a voice command, via INUIAddVoiceShortcutViewController, which would then add the shortcut to the Shortcuts app. This is how I am currently adding a shortcut in my app, but am wondering how I could go about offering shortcuts in the Shortcuts app without needing to call INUIAddVoiceShortcutViewController?

let activity = NSUserActivity(activityType: "com.example.shortcut")
activity.title = "Sample Shortcut"
activity.userInfo = ["speech" : "This is a sample."]
activity.isEligibleForSearch = true
activity.isEligibleForPrediction = true
activity.persistentIdentifier = "com.example.shortcut.myshortcut"
self.view.userActivity = activity
activity.becomeCurrent()

let siriShortcut = INShortcut(userActivity: activity)

// Setup view controller
let viewController = INUIAddVoiceShortcutViewController(shortcut: siriShortcut)

// Setup modal style
viewController.modalPresentationStyle = .formSheet

// Setup delegate
viewController.delegate = self

// Show view controller
DispatchQueue.main.async {
    self.present(viewController, animated: true, completion: nil)
}
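For reference, a sketch of the donation-only path I am wondering about (OrderCoffeeIntent is a hypothetical custom intent generated from an intent definition file, not something from my project): donating an INInteraction, with the activity or intent marked eligible for prediction, appears to be how apps surface shortcuts without ever presenting INUIAddVoiceShortcutViewController, though I have not confirmed this.

import Intents

func donateOrderShortcut() {
    // Hypothetical intent; replace with whatever intent the app actually defines.
    let intent = OrderCoffeeIntent()
    intent.suggestedInvocationPhrase = "Order my usual"

    // Donating the interaction tells the system the user performed this action,
    // which is what allows it to be suggested without the add-voice-shortcut UI.
    let interaction = INInteraction(intent: intent, response: nil)
    interaction.donate { error in
        if let error = error {
            print("Donation failed: \(error)")
        }
    }
}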
Posted Last updated
.
Post not yet marked as solved
0 Replies
394 Views
I have been exploring the sample image capture app, as well as the command-line tool, for Object Capture. I've not yet figured out the best practices for capturing the bottom of objects. For example, I have been attempting to demonstrate this with a sneaker. Per the WWDC21-10076 session, I have been circling around my object, taking photos using the sample capture app in automatic capture mode. While this does create a 3D model, during my capture I also turn my sneaker over and capture the bottom. However, when my 3D model is created via the command-line tool, the "bottom" of the sneaker is always missing. Is there a given configuration when creating the PhotogrammetrySession.Configuration that would be ideal for also including photos of the bottom of objects? While I pause, rotate my object to show the bottom, and continue capturing, I find that the bottom of the object is nearly always missing, despite many image captures that do include the bottom.
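For context, a sketch of how I am constructing the session in the command-line tool; imagesFolder, outputURL, and the detail level are placeholders, and these are the options I assume are the relevant ones to experiment with (sampleOrdering, isObjectMaskingEnabled, featureSensitivity):

import Foundation
import RealityKit

func makeModel(from imagesFolder: URL, to outputURL: URL) throws {
    var configuration = PhotogrammetrySession.Configuration()
    // Unordered, since the object is flipped partway through the capture.
    configuration.sampleOrdering = .unordered
    // Object masking separates the sneaker from its (changing) background.
    configuration.isObjectMaskingEnabled = true
    configuration.featureSensitivity = .normal

    let session = try PhotogrammetrySession(input: imagesFolder,
                                            configuration: configuration)
    // The session's outputs stream still needs to be consumed elsewhere to observe
    // progress and completion; this only enqueues the request.
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])
}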
Posted Last updated
.
Post not yet marked as solved
3 Replies
2.3k Views
I'm curious if anyone has discovered a way to determine whether their Messages app is in landscape left or landscape right. I've seen this topic come up in other discussions, but have not seen a resolution. While Messages Extensions do support use of the camera and AVFoundation, I've been unable to set my video orientation, as I'd usually use UIDevice.current.orientation to determine the orientation, and Messages Extensions consistently report an unknown orientation, rather than Face Up, Face Down, Portrait, Landscape Left, Landscape Right, etc.

I've been able to use a helpful suggestion from someone here to determine portrait vs. landscape by checking the following in my viewDidLayoutSubviews();

if UIScreen.main.bounds.size.width < UIScreen.main.bounds.size.height {
    // Portrait
} else {
    // Landscape
}

This, however, results in things working well in portrait, but can result in rotated or upside-down images in landscape, since I cannot set landscape left or landscape right (or portrait upside down on iPad, for that matter).

Thanks!
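One avenue I am considering (a sketch only; I have not verified that the hosting window scene is exposed this way inside a Messages extension, and previewLayer is just a stand-in for however the extension holds its preview) is to read the interface orientation from the window scene and map it onto AVCaptureVideoOrientation:

import AVFoundation
import Messages
import UIKit

final class CameraMessagesViewController: MSMessagesAppViewController {
    // Stand-in for the extension's existing capture preview layer.
    var previewLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()

        // The interface orientation distinguishes landscape left from landscape right.
        guard let orientation = view.window?.windowScene?.interfaceOrientation else { return }

        let videoOrientation: AVCaptureVideoOrientation
        switch orientation {
        case .portrait:           videoOrientation = .portrait
        case .portraitUpsideDown: videoOrientation = .portraitUpsideDown
        case .landscapeLeft:      videoOrientation = .landscapeLeft
        case .landscapeRight:     videoOrientation = .landscapeRight
        default:                  videoOrientation = .portrait
        }

        previewLayer?.connection?.videoOrientation = videoOrientation
    }
}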
Posted Last updated
.
Post not yet marked as solved
1 Replies
861 Views
While I am a bit new to keyboard shortcuts, I am looking to add a specific piece of functionality to my app. Specifically, I want to allow my user to trigger an action by pressing the spacebar, both on iPadOS (when using a keyboard) and on macOS. This would function similarly to how video editing programs like iMovie and Final Cut Pro work. I have a "play" button in place, and am trying to add a modifier, like so;

Button(action: {
    self.isPlaying.toggle()
}) {
    Image(systemName: isPlaying ? "pause.fill" : "play.fill")
}.keyboardShortcut(.space)
.help("Play timeline")

Based on the KeyboardShortcut documentation - https://developer.apple.com/documentation/swiftui/keyboardshortcut, this should be all I need to get things running. However, when building and testing my app, pressing the spacebar does not do anything (nor does the shortcut appear in the keyboard shortcuts list).
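One possibility I am exploring (unconfirmed as the actual cause) is that keyboardShortcut defaults its modifiers parameter to .command, so .keyboardShortcut(.space) would actually register Command+Space, which the system already claims; passing an empty modifier set should request the bare spacebar instead:

Button(action: {
    self.isPlaying.toggle()
}) {
    Image(systemName: isPlaying ? "pause.fill" : "play.fill")
}
// An empty modifiers set requests the spacebar alone, rather than Command+Space.
.keyboardShortcut(.space, modifiers: [])
.help("Play timeline")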
Posted Last updated
.
Post not yet marked as solved
5 Replies
1.7k Views
I am trying to build my first Widget, and am following the guidance in the Making a Configurable Widget - https://developer.apple.com/documentation/widgetkit/making-a-configurable-widget article. After confirming the default Widget runs successfully, I am trying to set up my Intent Definition and Intent Handler. I have taken the following steps;

Created a new intent definition file, with the custom intent's category set to View, eligibility for Widgets enabled, and the parameter set to a custom type with "Options are provided dynamically" selected.
Created a new Intent Handler target, and set the Supported Intent's class name to something relevant, such as SelectCharacterIntent.

The article implies that the newly created IntentHandler.swift file, which has an IntentHandler class, should be able to have that class conform to the protocol generated from the intent definition file, as noted;

Based on the custom intent definition file, Xcode generates a protocol, SelectCharacterIntentHandling, that the handler must conform to. Add this conformance to the declaration of the IntentHandler class.

However, my project immediately reports Cannot find type 'SelectCharacterIntentHandling' in scope. I am unsure if I am doing something wrong, but it seems peculiar that the SelectCharacterIntentHandling protocol should be generated without me ever seeing it. Surely, there must be a step to take to have that protocol generated so I can extend my IntentHandler class to support my dynamic intent. Thank you!
Posted Last updated
.
Post not yet marked as solved
0 Replies
322 Views
With the availability of tracking AppClipCodeAnchor in ARKit on iOS/iPadOS 14.3+, I'm curious if there is a way to determine the rotation (or more specifically, the "angle") at which the App Clip Code is detected. For example, an App Clip Code could appear on a business card, which a user might have lying flat on a table (therefore at a 0° angle). In another case, an App Clip Code could be printed and mounted to a wall, such as in a museum or a restaurant (therefore at a 90° angle). Anchoring AR experiences (especially ones built in Reality Composer) to the detected AppClipCodeAnchor results in strange behavior when the App Clip Code is anything other than 0°, as the content appears "tethered" to the real-world App Clip Code, and therefore appears unexpectedly rotated without manually transforming the rotation of the 3D content. When I print the details of the AppClipCodeAnchor, once detected in my ARKit session, I can see that a human-readable descriptor for the "angle" of the detected code is available. However, I can't seem to figure out how to determine this property from the AppClipCodeAnchor's transform. Is there an easy way to rotate 3D content to match the rotation of the scanned App Clip Code?
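For what it's worth, a sketch of what I have been trying (arView and contentEntity stand in for the view and the Reality Composer content in my actual project); my assumption is that anchoring the entity to the ARAppClipCodeAnchor itself, or extracting the rotation from the anchor's transform as a quaternion, should account for whether the code lies flat or hangs on a wall:

import ARKit
import RealityKit
import UIKit

final class AppClipCodeViewController: UIViewController, ARSessionDelegate {
    let arView = ARView(frame: .zero)
    var contentEntity: Entity = ModelEntity()

    // Assumes arView.session.delegate has been set to self elsewhere.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let codeAnchor = anchor as? ARAppClipCodeAnchor else { continue }

            // The anchor's 4x4 transform encodes rotation as well as position, so the
            // "angle" of the code can be read out as a quaternion if needed.
            let codeOrientation = simd_quatf(codeAnchor.transform)
            print("Detected App Clip Code orientation: \(codeOrientation)")

            // Anchoring the content to the ARAnchor itself (rather than to a plane)
            // should let it inherit that orientation automatically.
            let anchorEntity = AnchorEntity(anchor: codeAnchor)
            anchorEntity.addChild(contentEntity)
            arView.scene.addAnchor(anchorEntity)
        }
    }
}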
Posted Last updated
.
Post not yet marked as solved
3 Replies
964 Views
Much of this question is adapted from the idea of building a SCNGeometry from an ARMeshGeometry, as indicated in this - https://developer.apple.com/forums/thread/130599?answerId=414671022#414671022 very helpful post by @gchiste. In my app, I am creating a SCNScene with my scanned ARMeshGeometry built as SCNGeometry, and would like to apply a "texture" to the scene, replicating what the camera saw as each mesh was built. The end goal is to create a 3D model somewhat representative of the scanned environment. My understanding of texturing (and UV maps) is quite limited, but my general thought is that I would need to create texture coordinates for each mesh, then sample the ARFrame's capturedImage to apply to the mesh. Is there any particular documentation or general guidance one might be able to provide to create such an output?
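To make my thinking concrete, here is a rough sketch (untested) of the approach I imagine: project each world-space vertex into the captured image via the frame's camera, and normalize by the image size to obtain per-vertex texture coordinates. worldVertices is a hypothetical array of the mesh's vertices already converted to world space.

import ARKit

func textureCoordinates(for worldVertices: [simd_float3], in frame: ARFrame) -> [CGPoint] {
    let camera = frame.camera
    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))

    return worldVertices.map { vertex in
        // Pixel position of this vertex in the captured image (raw image is landscape right).
        let projected = camera.projectPoint(vertex,
                                            orientation: .landscapeRight,
                                            viewportSize: imageSize)
        // Normalize to 0...1 UV space.
        return CGPoint(x: projected.x / imageSize.width,
                       y: projected.y / imageSize.height)
    }
}

These points could then feed SCNGeometrySource(textureCoordinates:) alongside the vertex source, with the material's diffuse contents set to an image made from the same frame's capturedImage.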
Posted Last updated
.
Post not yet marked as solved
2 Replies
723 Views
I am currently working with RealityKit to load a USDZ model from my application's bundle. My model is being added like so;

var modelLoading: Cancellable?
modelLoading = Entity.loadAsync(named: name)
    .receive(on: RunLoop.main)
    .sink(receiveCompletion: { (completion) in
        modelLoading?.cancel()
    }, receiveValue: { (model) in
        model.setScale(SIMD3(repeating: 5.0), relativeTo: nil)
        let parentEntity = ModelEntity()
        parentEntity.addChild(model)
        let entityBounds = model.visualBounds(relativeTo: parentEntity)
        parentEntity.collision = CollisionComponent(shapes: [ShapeResource.generateBox(size: entityBounds.extents).offsetBy(translation: entityBounds.center)])
        self.arView.installGestures(for: parentEntity)
        let anchor = AnchorEntity(plane: .horizontal)
        anchor.addChild(parentEntity)
        arView.scene.addAnchor(anchor)
    })

When my model is added to the scene, which works as expected, I notice that the model has no "ground shadows." This differs from viewing the same model via AR Quick Look, as well as from loading a Reality Composer project (.rcproject), which seems to add grounding shadows automatically. While I have done some research into PointLight, DirectionalLight, and SpotLight entities, I am quite a novice at 3D modeling, and only seek to add a shadow just below the object, to give it a more realistic appearance on tables. Is there a methodology for achieving this?
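A sketch of one direction I have been considering, based on my reading about light entities (the intensity, distances, and helper name below are placeholders I have not confirmed): attach a DirectionalLight with a shadow to the same anchor as the model, e.g. addShadowLight(to: anchor) in the code above, so RealityKit can render a shadow beneath it.

import RealityKit
import UIKit

func addShadowLight(to anchor: AnchorEntity) {
    let directionalLight = DirectionalLight()
    directionalLight.light = DirectionalLightComponent(color: .white,
                                                       intensity: 1000,
                                                       isRealWorldProxy: false)
    // The shadow component is what actually produces the grounding shadow under the model.
    directionalLight.shadow = DirectionalLightComponent.Shadow(maximumDistance: 3,
                                                               depthBias: 2)
    // Aim the light downward at roughly 45 degrees.
    directionalLight.orientation = simd_quatf(angle: -.pi / 4, axis: [1, 0, 0])
    anchor.addChild(directionalLight)
}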
Posted Last updated
.
Post not yet marked as solved
2 Replies
523 Views
Summary: I am using the Vision framework, in conjunction with AVFoundation, to detect facial landmarks of each face in the camera feed (by way of VNDetectFaceLandmarksRequest). From there, I take the found observations and unproject each point into a SceneKit view (SCNView), then use those points as the vertices to draw a custom geometry that is textured with a material over each found face. Effectively, I am working to recreate how an ARFaceTrackingConfiguration functions.

Problem: When testing this code, the mesh appears properly (that is, appears affixed to a user's face), but only when using the front camera in landscape right orientation. When I rotate my device, or switch to the rear camera, the unprojected points no longer align with the found face. While the code runs as expected (that is, it generates the face mesh for each found face) in all orientations, the mesh is wildly misaligned in all other cases. My belief is that this issue stems from how I convert the face's bounding box (using VNImageRectForNormalizedRect, which I am calculating with the width/height of my SCNView, not my pixel buffer, which is typically much larger), though all modifications I have tried result in the same issue. Outside of that, I also believe this could be an issue with my SCNCamera, as I am a bit unsure how the transform/projection matrix works and whether that is needed here.

Sample of Vision request setup:

// Setup Vision request options
var requestHandlerOptions: [VNImageOption: AnyObject] = [:]

// Setup camera intrinsics
let cameraIntrinsicData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil)
if cameraIntrinsicData != nil {
    requestHandlerOptions[VNImageOption.cameraIntrinsics] = cameraIntrinsicData
}

// Set EXIF orientation
let exifOrientation = self.exifOrientationForCurrentDeviceOrientation()

// Setup vision request handler
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: exifOrientation, options: requestHandlerOptions)

// Setup the completion handler
let completion: VNRequestCompletionHandler = { request, error in
    let observations = request.results as! [VNFaceObservation]

    // Draw faces
    DispatchQueue.main.async {
        drawFaceGeometry(observations: observations)
    }
}

// Setup the image request
let request = VNDetectFaceLandmarksRequest(completionHandler: completion)

// Handle the request
do {
    try handler.perform([request])
} catch {
    print(error)
}

Sample of SCNView setup:

// Setup SCNView
let scnView = SCNView()
scnView.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(scnView)
scnView.showsStatistics = true
NSLayoutConstraint.activate([
    scnView.leadingAnchor.constraint(equalTo: self.view.leadingAnchor),
    scnView.topAnchor.constraint(equalTo: self.view.topAnchor),
    scnView.bottomAnchor.constraint(equalTo: self.view.bottomAnchor),
    scnView.trailingAnchor.constraint(equalTo: self.view.trailingAnchor)
])

// Setup scene
let scene = SCNScene()
scnView.scene = scene

// Setup camera
let cameraNode = SCNNode()
let camera = SCNCamera()
cameraNode.camera = camera
scnView.scene?.rootNode.addChildNode(cameraNode)
cameraNode.position = SCNVector3(x: 0, y: 0, z: 16)

// Setup light
let ambientLightNode = SCNNode()
ambientLightNode.light = SCNLight()
ambientLightNode.light?.type = SCNLight.LightType.ambient
ambientLightNode.light?.color = UIColor.darkGray
scnView.scene?.rootNode.addChildNode(ambientLightNode)

Sample of "face processing":

func drawFaceGeometry(observations: [VNFaceObservation]) {
    // An array of face nodes, one SCNNode for each detected face
    var faceNode = [SCNNode]()

    // The origin point
    let projectedOrigin = sceneView.projectPoint(SCNVector3Zero)

    // Iterate through each found face
    for observation in observations {
        // Setup a SCNNode for the face
        let face = SCNNode()

        // Setup the found bounds
        let faceBounds = VNImageRectForNormalizedRect(observation.boundingBox, Int(self.scnView.bounds.width), Int(self.scnView.bounds.height))

        // Verify we have landmarks
        if let landmarks = observation.landmarks {
            // Landmarks are relative to and normalized within face bounds
            let affineTransform = CGAffineTransform(translationX: faceBounds.origin.x, y: faceBounds.origin.y)
                .scaledBy(x: faceBounds.size.width, y: faceBounds.size.height)

            // Add all points as vertices
            var vertices = [SCNVector3]()

            // Verify we have points
            if let allPoints = landmarks.allPoints {
                // Iterate through each point
                for point in allPoints.normalizedPoints {
                    // Apply the transform to convert each point to the face's bounding box range
                    let normalizedPoint = point.applying(affineTransform)
                    let projected = SCNVector3(normalizedPoint.x, normalizedPoint.y, CGFloat(projectedOrigin.z))
                    let unprojected = sceneView.unprojectPoint(projected)
                    vertices.append(unprojected)
                }
            }

            // Setup indices
            var indices = [UInt16]()
            // Add indices
            // ... Removed for brevity ...

            // Setup texture coordinates
            var coordinates = [CGPoint]()
            // Add texture coordinates
            // ... Removed for brevity ...

            // Setup texture image
            let imageWidth = 2048.0
            let normalizedCoordinates = coordinates.map { coord -> CGPoint in
                let x = coord.x / CGFloat(imageWidth)
                let y = coord.y / CGFloat(imageWidth)
                let textureCoord = CGPoint(x: x, y: y)
                return textureCoord
            }

            // Setup sources
            let sources = SCNGeometrySource(vertices: vertices)
            let textureCoordinates = SCNGeometrySource(textureCoordinates: normalizedCoordinates)

            // Setup elements
            let elements = SCNGeometryElement(indices: indices, primitiveType: .triangles)

            // Setup geometry
            let geometry = SCNGeometry(sources: [sources, textureCoordinates], elements: [elements])
            geometry.firstMaterial?.diffuse.contents = textureImage

            // Setup node
            let customFace = SCNNode(geometry: geometry)
            sceneView.scene?.rootNode.addChildNode(customFace)
        }

        // Append the face to the face nodes array
        faceNode.append(face)
    }

    // Iterate the face nodes and append to the scene
    for node in faceNode {
        sceneView.scene?.rootNode.addChildNode(node)
    }
}
Posted Last updated
.
Post not yet marked as solved
2 Replies
1.1k Views
I am trying to follow the guidance for testing a Local Experience, as listed in the Testing Your App Clip’s Launch Experience - https://developer.apple.com/documentation/app_clips/testing_your_app_clip_s_launch_experience documentation. I have successfully created my App Clip target, and can confirm that running the App Clip on my device does launch the App Clip app as I expected. Further, I can successfully test the App Clip on device by setting the _XCAppClipURL argument in the App Clip's scheme.

I would now like to test a Local Experience. The documentation states that for testing Local Experiences;

To test your app clip’s invocation with a local experience, you don’t need to add the Associated Domains Entitlement, make changes to the Apple App Site Association file on your web server, or create an app clip experience for testing in TestFlight.

Therefore, I should be able to configure a Local Experience with any desired domain in Settings -> Developer -> Local Experience, generate a QR code or NFC tag with that same URL, and the App Clip experience should appear. I have taken the following steps;

Built and run my App Clip on my local device.
In Settings -> Developer -> Local Experience, registered a new experience using the URL prefix https://somewebsite.com
Set my Bundle ID to com.mycompany.myapp.Clip, which exactly matches the Bundle Identifier listed in Xcode under my App Clip target.
Generated a QR code which directs me to https://somewebsite.com

In theory, I believe I should be able to open the Camera app on my device, point the camera at the QR code, and the App Clip experience should appear. However, I receive mixed results; 50% of the time, I receive a pop-up directing me to open https://somewebsite.com in Safari, and the other 50% of the time, no banner or action occurs whatsoever. Is this an issue anyone has faced before, or have I pursued these steps out of order?
Posted Last updated
.
Post not yet marked as solved
0 Replies
367 Views
I am a bit confused about the proper usage of GeometryReader. For example, I have a SwiftUI View, like so;

var body: some View {
    VStack {
        Text("Hello, World!")
            .background(Color.red)
        Text("More Text")
            .background(Color.blue)
    }
}

This positions my VStack perfectly in the middle of the device, both horizontally and vertically. At some point, I may need to know the width of the View's frame, and therefore want to introduce a GeometryReader;

var body: some View {
    GeometryReader { geometry in
        VStack {
            Text("Hello, World!")
                .background(Color.red)
            Text("More Text")
                .background(Color.blue)
        }
    }
}

While I now have access to the View's frame using the GeometryProxy, my VStack is now moved to the top left corner of the device. Why is this? Subsequently, is there any way to get the size of the View without having the layout altered?
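One pattern I have come across (sketched below; SizePreferenceKey is a helper I am defining here, not part of SwiftUI) is to place the GeometryReader in the view's background, so it adopts the VStack's own size instead of expanding to fill the parent, and then report that size through a preference:

import SwiftUI

struct SizePreferenceKey: PreferenceKey {
    static var defaultValue: CGSize = .zero
    static func reduce(value: inout CGSize, nextValue: () -> CGSize) {
        value = nextValue()
    }
}

struct ContentView: View {
    @State private var measuredWidth: CGFloat = 0

    var body: some View {
        VStack {
            Text("Hello, World!")
                .background(Color.red)
            Text("More Text")
                .background(Color.blue)
        }
        // The background GeometryReader is sized to the VStack, so the VStack keeps
        // its centered layout while its size is reported via the preference.
        .background(
            GeometryReader { geometry in
                Color.clear
                    .preference(key: SizePreferenceKey.self, value: geometry.size)
            }
        )
        .onPreferenceChange(SizePreferenceKey.self) { size in
            measuredWidth = size.width
        }
    }
}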
Posted Last updated
.