Discuss Spatial Computing on Apple Platforms.

Post · Replies · Boosts · Views · Activity

Run to Device (on WLAN)?
Is it possible to use a local Wi-Fi router connecting Vision Pro and Mac for development? I tried from both Unity and Xcode. From Unity, the host app wouldn't open without Wi-Fi (an internet connection). From Xcode, I can see the Vision Pro paired, but when I try to run, there's no device listed. Any suggestions? Thanks a lot, /Ruiying
2
0
165
Jan ’25
ImpulseAction giving strange error
I am trying to apply an ImpulseAction to an entity, but every time entity.playAnimation(impulseAnimation) is executed, the log says Cannot find a BindPoint for any bind path: "". I can't figure out what is wrong. Could someone please help me with this?

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle),
               let sphere = immersiveContentEntity.findEntity(named: "Sphere") {
                sphere.components.set(CollisionComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)]))
                sphere.components.set(PhysicsBodyComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)], mass: 1000))
                sphere.components[PhysicsBodyComponent.self]?.isAffectedByGravity = false
                sphere.position = [0, 1, -1]
                content.add(immersiveContentEntity)

                // Create an action to apply an impulse, forcing the object to move upwards.
                let impulseAction = ImpulseAction(linearImpulse: [0, 1, 0])

                // Create a small positive duration value.
                let duration: TimeInterval = 1 / 30.0

                // Create an animation for the action, which will start playing
                // after five seconds.
                do {
                    let impulseAnimation = try AnimationResource
                        .makeActionAnimation(for: impulseAction,
                                             duration: duration,
                                             delay: 5.0)
                    // Play the sequence animation that will play the actions.
                    sphere.playAnimation(impulseAnimation)
                } catch {
                    print("Error: \(error)")
                }
            }
        }
    }
}
```

All the logs:

```
Could not locate file 'default-binaryarchive.metallib' in bundle.
Error creating the CFMessagePort needed to communicate with PPT.
AddInstanceForFactory: No factory registered for id <CFUUID 0x6000029a5b80> F8BB1C28-BAE8-11D6-9C31-00039315CD46
cannot add handler to 0 from 1 - dropping
nw_socket_copy_info [C1:2] getsockopt TCP_INFO failed [102: Operation not supported on socket]
nw_socket_copy_info getsockopt TCP_INFO failed [102: Operation not supported on socket]
Registering library (/Library/Developer/CoreSimulator/Volumes/xrOS_22N840/Library/Developer/CoreSimulator/Profiles/Runtimes/xrOS 2.2.simruntime/Contents/Resources/RuntimeRoot/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
cannot add handler to 0 from 1 - dropping
Cannot find a BindPoint for any bind path: "", ""
Sync object without snapshot while removing view (id: 2816861686082450363, type: 6373420419761316588[SelectableSceneContentIdentifierComponent]).
```

But I think only Cannot find a BindPoint for any bind path: "", "" is relevant.
2
0
325
Jan ’25
Ground shadow and visibility
Hey, I'm building an interior design app in visionOS 2.0. I fetch the planes detected by ARKit and then add them with an OcclusionMaterial to make sure my objects are occluded accordingly. However, I'm facing two problems with this:
- The ground shadows are completely disabled as soon as an occlusion material is added, even if I inset the planes doing the occlusion. I've looked into this: https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit) but when I tried to use it, it behaved exactly like OcclusionMaterial.
- The planes also occlude all windows (mine and the system ones), which is behavior I'd like to avoid. I only want to occlude the entities I added.

Is there a way to achieve this? Thanks in advance.
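For context, here is a minimal sketch of the setup described above, assuming the visionOS ARKit PlaneAnchor type delivered by a PlaneDetectionProvider; the helper name is illustrative:

```swift
import ARKit
import RealityKit

// Hypothetical helper: build an occluder entity from a detected plane.
func makeOccluder(for anchor: PlaneAnchor) -> ModelEntity {
    // Flat geometry matching the detected plane's extent.
    let mesh = MeshResource.generatePlane(width: anchor.geometry.extent.width,
                                          depth: anchor.geometry.extent.height)
    // OcclusionMaterial hides whatever is rendered behind it, which, as the
    // post notes, currently includes grounding shadows and other windows.
    let occluder = ModelEntity(mesh: mesh, materials: [OcclusionMaterial()])
    // Position the extent geometry in anchor space, then at the anchor's pose.
    occluder.transform = Transform(matrix: anchor.originFromAnchorTransform
                                         * anchor.geometry.extent.anchorFromExtentTransform)
    return occluder
}
```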
2
1
324
Dec ’24
Camera settings at intrinsic calibration time
Hi everyone, I am wondering which settings the camera(s) were set to at the time they were calibrated. For instance, one aspect that is easy to find is the reference resolution of the images taken when calibrating the intrinsics; this is retrieved via intrinsicMatrixReferenceDimensions, making sure the principal point is referenced to the resolution in use while the calibration was ongoing. However, I recently saw that there are focusing modes that can displace the lens's physical position, settings like:
- autoFocusRangeRestriction: none, near, far
- setFocusModeLocked: locks the lens position at the specified value and sets the focus mode to a locked state.

My concern lies in the impact these lens displacements can have on the intrinsic matrix parameters: after the lens position has changed, those parameters may no longer describe the camera. In simple words: what focus 'mode'/'range' were the cameras set to when they were calibrated for intrinsics?
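For reference, a minimal sketch of the lens-locking API mentioned above, assuming an already-configured AVCaptureDevice; note that intrinsicMatrixReferenceDimensions itself is a property of AVCameraCalibrationData:

```swift
import AVFoundation

// Lock the lens at a fixed position (0.0 = nearest, 1.0 = farthest).
// This physically moves the optics, the displacement the question is
// concerned about with respect to the calibrated intrinsics.
func lockLens(of device: AVCaptureDevice, at lensPosition: Float) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    device.setFocusModeLocked(lensPosition: lensPosition, completionHandler: nil)
    // Autofocus range can likewise be restricted, e.g.:
    // device.autoFocusRangeRestriction = .near
}
```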
0
0
289
Jan ’25
[Vision, visionOS] Is it possible to use the Vision framework on visionOS for body tracking?
Hello, I checked the following documentation:
- Vision | Apple Developer Documentation
- Discover Swift enhancements in the Vision framework - WWDC24 - Videos - Apple Developer

I saw that the Vision framework is available on visionOS. So I want to know: is it possible to use the Vision framework on visionOS to track human and animal body poses? Or are there limits to using it on visionOS?
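For reference, the kind of request in question, sketched with the classic Vision API (the WWDC24 session introduces Swift-native equivalents such as DetectHumanBodyPoseRequest); whether and how well this runs on visionOS is exactly what is being asked:

```swift
import CoreGraphics
import Vision

// Detect 2D human body pose key points in a single image.
func detectHumanBodyPose(in image: CGImage) throws -> [VNHumanBodyPoseObservation] {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results ?? []
}
```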
1
0
319
Dec ’24
I want to know the update principle of the `RealityView`.
Here are the code snippets.

```swift
import SwiftUI
import RealityKit

struct RealityViewTestView: View {
    @State private var texts: [String] = []

    var body: some View {
        RealityView { content, attachments in
        } update: { content, attachments in
            for text in texts {
                if let textEntity = attachments.entity(for: text) {
                    textEntity.position.x = Float.random(in: -0.1...0.1)
                    content.add(textEntity)
                }
            }
        } attachments: {
            ForEach(texts, id: \.self) { text in
                Attachment(id: text) {
                    Text(text)
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    texts.append(String(UUID().uuidString.prefix(6)))
                }
            }
            ToolbarItem {
                Button("Remove") {
                    texts.remove(at: Int.random(in: 0..<texts.count))
                }
            }
        }
    }
}
```

```swift
import SwiftUI
import RealityKit

struct RealityViewTestView: View {
    @State private var texts: [String] = []
    @State private var entities: [Entity] = []

    var body: some View {
        RealityView { content, attachments in
        } update: { content, attachments in
            // for text in texts {
            //     if let textEntity = attachments.entity(for: text) {
            //         textEntity.position.x = Float.random(in: -0.1...0.1)
            //         content.add(textEntity)
            //     }
            // }
            for entity in entities {
                content.add(entity)
            }
        } attachments: {
            ForEach(texts, id: \.self) { text in
                Attachment(id: text) {
                    Text(text)
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    // texts.append(String(UUID().uuidString.prefix(6)))
                    let m = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                        materials: [SimpleMaterial(color: .white, isMetallic: false)])
                    m.position.x = Float.random(in: -0.2...0.2)
                    entities.append(m)
                }
            }
            ToolbarItem {
                Button("Remove") {
                    // texts.remove(at: Int.random(in: 0..<texts.count))
                    entities.removeLast()
                }
            }
        }
    }
}
```

About the first code snippet: when I remove an element from texts, why does content automatically remove the corresponding entity? And in the second code snippet, content does not automatically remove the corresponding entity. I am very curious.
1
0
269
Dec ’24
Export camera positions from the Object Capture API
Hi, I would like to train Gaussian splats from my object captures, so I need a point cloud and camera positions together with the original photos to train the splats in an app like postShot. I could do this with Reality Capture, which supports exporting point clouds and camera positions, but it does not do well with turntable photogrammetry, while the Apple Object Capture API produces really solid results with turntable images. So my question is: can I export camera data from my object captures to use in another application? Or is there perhaps a plan to add this feature in the future? It would be really helpful for creating ultra-realistic 3D objects in Gaussian splat format. Thanks for any suggestions…
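A minimal sketch of one avenue, assuming the macOS RealityKit PhotogrammetrySession API, which accepts a .poses request (and, from macOS 14, a .pointCloud request) alongside .modelFile; whether this fits the poster's capture workflow is not confirmed:

```swift
import Foundation
import RealityKit

// Ask a photogrammetry session for the estimated camera poses of the
// input photos, in addition to (or instead of) a model file.
func requestCameraPoses(imagesFolder: URL) throws -> PhotogrammetrySession {
    let session = try PhotogrammetrySession(input: imagesFolder)
    Task {
        for try await output in session.outputs {
            if case let .requestComplete(_, .poses(poses)) = output {
                print(poses)  // estimated per-image camera poses
            }
        }
    }
    try session.process(requests: [.poses])
    return session
}
```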
0
1
251
Dec ’24
Test Vision Pro App on Vision Pro
This might be a very silly question; anyway, I tried many ways and didn't find a solution. My Mac mini M4 is the base model with only 16 GB of memory, which is tight for developing a Vision Pro application and testing with the simulator (CleanMacX always warns me about low memory). I want to debug and test the application directly on the Vision Pro instead of the simulator (plus my app needs two-handed gestures, which the simulator might not support). Are there proper instructions on how to test/debug/run the app directly on the Vision Pro device instead of the simulator, in favor of speed?
1
0
254
Dec ’24
How to search locations globally rather than locally?
I'm building a weather app. Users can search locations to get the weather, but the problem is that the results only show locations in my country, not globally. For example, I'm in China and I can't search for New York; it just shows nothing. Here's my code:

```swift
import MapKit
import SwiftUI

@Observable
class SearchPlaceManager: NSObject {
    var searchText: String = ""
    let searchCompleter = MKLocalSearchCompleter()
    var searchResults: [MKLocalSearchCompletion] = []

    override init() {
        super.init()
        searchCompleter.resultTypes = .address
        searchCompleter.delegate = self
    }

    @MainActor
    func searchLocation() {
        if !searchText.isEmpty {
            searchCompleter.queryFragment = searchText
        }
    }
}

extension SearchPlaceManager: MKLocalSearchCompleterDelegate {
    func completerDidUpdateResults(_ completer: MKLocalSearchCompleter) {
        withAnimation {
            self.searchResults = completer.results
        }
    }
}
```

I've also tried setting

```swift
searchCompleter.region = MKCoordinateRegion(
    center: CLLocationCoordinate2D(latitude: 0, longitude: 0),
    span: MKCoordinateSpan(latitudeDelta: 180, longitudeDelta: 360)
)
```

but it doesn't work.
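For completeness, a completer result would typically be resolved to coordinates with MKLocalSearch afterwards; this sketch shows that follow-up step and does not by itself change the completer's regional coverage:

```swift
import MapKit

// Resolve a completer result into a concrete place with coordinates.
func resolve(_ completion: MKLocalSearchCompletion) async throws -> CLLocationCoordinate2D? {
    let request = MKLocalSearch.Request(completion: completion)
    let response = try await MKLocalSearch(request: request).start()
    return response.mapItems.first?.placemark.coordinate
}
```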
2
0
499
Dec ’24
Disable Object Occlusion on iOS
I’m developing an app using RealityKit and RealityView. On newer iPhones, such as the iPhone 15 Pro, Object Occlusion appears to be enabled by default, which causes 3D entities to be hidden behind real-world objects in the scene. However, I need to disable this behavior to ensure proper rendering of my 3D content. This issue does not occur on older devices like the iPhone 13, where the app works as intended. I haven’t been able to find a solution to explicitly disable object occlusion on the newer devices for RealityView. Any guidance or suggestions to resolve this issue would be greatly appreciated! Thanks!
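For comparison, with the older ARView API this is an explicit scene-understanding option; how to reach the equivalent switch from SwiftUI's RealityView is the open question here:

```swift
import RealityKit

// With ARView, mesh-based object occlusion is opted in or out via
// the environment's scene-understanding options.
func disableObjectOcclusion(in arView: ARView) {
    arView.environment.sceneUnderstanding.options.remove(.occlusion)
}
```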
1
0
337
Dec ’24
Immersive Space not working
If I set UIApplicationPreferredDefaultSceneSessionRole to UISceneSessionRoleImmersiveSpaceApplication, then my immersive space for the image works fine. But when I use UIWindowSceneSessionRoleApplication and try to open the immersive space from a particular sub-screen, it does not show the image in the immersive space (the immersive space does not open). Does anyone have an idea what the issue is? My Info.plist values are as follows:

```xml
<key>UIApplicationSceneManifest</key>
<dict>
    <key>UIApplicationPreferredDefaultSceneSessionRole</key>
    <string>UIWindowSceneSessionRoleApplication</string>
    <key>UIApplicationSupportsMultipleScenes</key>
    <true/>
    <key>UISceneConfigurations</key>
    <dict>
        <key>UISceneSessionRoleImmersiveSpaceApplication</key>
        <array>
            <dict>
                <key>UISceneInitialImmersionStyle</key>
                <string>UIImmersionStyleFull</string>
            </dict>
        </array>
    </dict>
</dict>
```
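For context, with UIWindowSceneSessionRoleApplication as the default role, a SwiftUI scene would typically open the immersive space programmatically; a minimal sketch, where "ImageSpace" is an illustrative identifier that must match an ImmersiveSpace(id:) declared in the app's scene body:

```swift
import SwiftUI

struct OpenSpaceButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter Immersive Space") {
            Task {
                // "ImageSpace" is hypothetical; it must match an
                // ImmersiveSpace(id:) in the App's scene body.
                _ = await openImmersiveSpace(id: "ImageSpace")
            }
        }
    }
}
```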
1
0
406
Dec ’24
Ground Shadows pass through objects
This issue has existed since visionOS 1, unless this is how it is supposed to work. As you can see in the screen capture, the shadows from the top box are shown on all three boxes below. The screen capture is from Reality Composer Pro, but the same thing happens on the Vision Pro. Is there any way to stop this behavior and have shadows only on the first object below the object that is casting them?
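For reference, the per-entity knob that exists today is GroundingShadowComponent; a sketch, assuming the receivesShadow property added in visionOS 2, that opts an entity out of receiving grounding shadows entirely (all or nothing, not the "first surface only" behavior asked for):

```swift
import RealityKit

// Opt an entity out of receiving grounding shadows (visionOS 2+),
// while other entities can still cast and receive them.
func stopReceivingShadows(_ entity: Entity) {
    var shadow = entity.components[GroundingShadowComponent.self]
        ?? GroundingShadowComponent(castsShadow: false)
    shadow.receivesShadow = false
    entity.components.set(shadow)
}
```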
2
0
387
Dec ’24
How to access the Scene instance of a visionOS app using immersive space in a unit test?
I have a visionOS app using an immersive space with RealityView. The app adds RealityKit entities to the app's Scene instance and uses raycasting to find CollisionCastHits. Now I want to write a unit test to check whether the app finds the right hits.
To do so, I have to access the Scene instance to add entities and to check whether they are hit by scene.raycast. But how can I access the Scene instance? I can access it, e.g., after creating the RealityView via its content parameter, or via @Environment(\.realityKitScene). But this seems not to be possible in a unit test. I tried the following test function:

```swift
import RealityKit
import SwiftUI
import Testing

@MainActor
@Test func test() async throws {
    var scene: RealityKit.Scene?
    await withCheckedContinuation { continuation in
        _ = RealityView(make: { content in
            print("make")
            let entity = Entity()
            content.add(entity)
            scene = entity.scene
            continuation.resume()
        })
    }
    #expect(scene != nil)
}
```

But this logs

```
◇ Test test() started.
SWIFT TASK CONTINUATION MISUSE: test() leaked its continuation!
```

The reason is apparently that the make closure of RealityView is only called when SwiftUI invokes it within the body of a SwiftUI View. So, is it possible at all to access the app's scene in a unit test?
0
0
289
Dec ’24