Post not yet marked as solved
Currently, I have a requirement to use models created in Unity in RealityKit, so I need to convert the models to the USDZ format. I used this approach (How to easily create AR content for iPhone using Unity), but the result was not as expected: converted models do not display correctly, and animations on objects do not appear in the converted files. I also noticed that objects made with the Unity particle system (e.g., confetti) were not converted at all with this approach.
I also tried converting by selecting the ‘Export selected as USDZ’ menu item from Unity’s main menu bar, but nothing worked. So is there any effective way to convert Unity models, including the particle systems, to USDZ?
Post not yet marked as solved
Hi everybody,
I am an engineering student, and at the university we have to create a little AR app.
Now, in Xcode I want to do image tracking and show my 3D object above the detected image. I followed this video: "https://www.youtube.com/watch?v=VmPHE8M2GZI" until minute 39:18. After that, it doesn't work.
The simulator detects the image and shows a light grey plane above it, even when I move around, but the 3D model doesn't show up.
I imported the ns.obj file into art.scnassets
Converted it to a SceneKit .scn file
Changed the "diffuse" texture to green
I tried to scale it, but still no result
I also tried with a 3D object downloaded from the internet
Long story short... it doesn't work. Does anyone know what the problem could be?
Thank you very much.
Greetings, Rosario
PS:
I use Xcode Version 14.3.
That's my code in the ViewController.swift file:
import SwiftUI
import RealityKit
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    var nsNode: SCNNode?

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.autoenablesDefaultLighting = true

        // Load the converted .scn model and keep its root node for later.
        let nsScene = SCNScene(named: "art.scnassets/ns.scn")
        nsNode = nsScene?.rootNode
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Track the reference images from the "AR Resources" asset group.
        let configuration = ARImageTrackingConfiguration()
        if let trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: Bundle.main) {
            configuration.trackingImages = trackingImages
            configuration.maximumNumberOfTrackedImages = 2
        }
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        let node = SCNNode()

        if let imageAnchor = anchor as? ARImageAnchor {
            // Semi-transparent plane matching the physical size of the detected image.
            let size = imageAnchor.referenceImage.physicalSize
            let plane = SCNPlane(width: size.width, height: size.height)
            plane.firstMaterial?.diffuse.contents = UIColor.white.withAlphaComponent(0.5)
            plane.cornerRadius = 0.005

            let planeNode = SCNNode(geometry: plane)
            planeNode.eulerAngles.x = -.pi / 2
            node.addChildNode(planeNode)

            // Attach the loaded model on top of the detected image.
            if let shapeNode = nsNode {
                node.addChildNode(shapeNode)
            }
        }
        return node
    }
}
Post not yet marked as solved
I am trying to make a simple 2D overlay for the face anchor mesh. I am unsure how to get my graphic to line up with what the mesh is showing. Is there a template image I should paint over and then apply in Xcode? Any tutorials or links to point me in the right direction would be much appreciated.
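For context, here is the kind of setup I have been experimenting with: a minimal sketch that maps a painted 2D image onto the face mesh via ARSCNFaceGeometry, assuming an ARSCNView running a face-tracking session (the asset name "faceOverlay" is a placeholder). What I can't figure out is which template/UV layout to paint against.

import ARKit
import SceneKit
import UIKit

class FaceOverlayDelegate: NSObject, ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard anchor is ARFaceAnchor,
              let device = renderer.device,
              let faceGeometry = ARSCNFaceGeometry(device: device) else { return nil }

        // The painted image is stretched over the face mesh using its built-in UV layout.
        faceGeometry.firstMaterial?.diffuse.contents = UIImage(named: "faceOverlay") // hypothetical asset
        faceGeometry.firstMaterial?.lightingModel = .constant
        return SCNNode(geometry: faceGeometry)
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
        // Keep the overlay mesh in sync with the tracked face.
        faceGeometry.update(from: faceAnchor.geometry)
    }
}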
Post not yet marked as solved
Hi Apple! I want to develop apps for Reality Pro - how do I get started? Do I need a Reality Pro to run the app, or is emulation possible? What APIs will we have access to? When can I get started?
I have an AR app and will soon be making a visionOS version. Can we use SwiftUI to create a world-space 3D UI with features like ZStack, buttons, etc.?
Currently, I am using UIKit to do that.
Post not yet marked as solved
I am creating an AR app that uses ARKit and SceneKit.
Will visionOS support SceneKit AR applications, or will I have to rewrite my app using RealityKit?
Thanks for answering my question.
: - )
I would like to add interactive 3D UI, RealityKit models, touch-gesture interactivity, etc. to my AR app for mobile. How can I use RealityKit views, model views, 3D SwiftUI, etc. in my mobile AR app?
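For reference, this is roughly the direction I have in mind, as a minimal sketch (not a confirmed approach): wrapping a RealityKit ARView in UIViewRepresentable so SwiftUI controls like buttons can sit on top of the AR content. The box and anchor are just placeholder content.

import SwiftUI
import RealityKit

// Wraps a RealityKit ARView so it can sit inside a SwiftUI hierarchy.
struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        // Placeholder content: a small box anchored to a detected horizontal plane.
        let box = ModelEntity(mesh: .generateBox(size: 0.1),
                              materials: [SimpleMaterial(color: .blue, isMetallic: false)])
        let anchor = AnchorEntity(plane: .horizontal)
        anchor.addChild(box)
        arView.scene.addAnchor(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

struct ContentView: View {
    var body: some View {
        ZStack(alignment: .bottom) {
            ARViewContainer().ignoresSafeArea()
            Button("Tap me") { /* hook up RealityKit interactions here */ }
                .padding()
        }
    }
}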
Post not yet marked as solved
I’m wondering if it’s possible to use SceneKit or GameplayKit to handle physics etc. and have SwiftUI create the 3D content?
Post not yet marked as solved
Interested to start playing with this stuff, but yesterday's Xcode 15 beta doesn't seem to include it. Is it available to devs now?
Post not yet marked as solved
One thing that was not very clear to me in the WWDC videos regarding visionOS app development was:
If I want to trigger an action (let's say, change the scene) based on the user's position relative to something, will I be able to do that?
Example: If the user comes too close to an object, it starts to play some animation.
Reference video:
wwdc2023-10080
Post not yet marked as solved
Hello Dev Community,
I've been thinking over Apple's preference for USDZ for AR and 3D content, especially given the widely used glTF format. I'm keen to discuss and hear your insights on this choice.
USDZ, backed by Apple, has seen a surge in the AR community. It boasts advantages like compactness, animation support, and ARKit compatibility. In contrast, glTF is also a popular format with its own merits, such as being an open standard and offering flexibility.
Here are some of my questions about the use of USDZ:
Why did Apple choose USDZ over other 3D file formats like glTF?
What benefits does USDZ bring to Apple's AR and 3D content ecosystem?
Are there any limitations of USDZ compared to other file formats?
Could factors like compatibility, security, or integration ease have influenced Apple's decision?
I would love to hear your thoughts on this. Feel free to share any experiences with USDZ or other 3D file formats within Apple's ecosystem!
Post not yet marked as solved
In ARKit for iPad, I could 1) build a mesh on top of the real world and 2) request a people occlusion map for use with my application, so people could move behind or in front of virtual content via compositing. However, in visionOS there is no ARFrame image to pass to the function that would generate the occlusion data. Is it possible to do people occlusion in visionOS? If so, how is it done: through a data provider, or is it automatic when passthrough is enabled? If it’s not possible, is this something that might have a solution in future updates as the platform develops? Being able to combine virtual content and the real world, with people able to interact with the content convincingly, is a really important aspect of AR, so it would make sense for this to be possible.
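For reference, this is roughly the iPad-side configuration I am describing, as a sketch (assumes an existing ARSession):

import ARKit

func runConfiguration(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()

    // 1) Mesh the real world (LiDAR devices only).
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }

    // 2) Ask for per-frame people segmentation with depth, so people can
    //    pass in front of or behind virtual content during compositing.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }

    session.run(configuration)
}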
Apparently, shadows aren’t generated for procedural geometry in RealityKit:
https://codingxr.com/articles/shadows-lights-in-realitykit/
Has this been fixed? My projects tend to involve a lot of procedurally generated meshes as opposed to imported models. This will be even more important when visionOS is out.
On a similar note, it used to be that ground shadows were not per-entity. I’d like to enable or disable them per entity. Is that possible?
Since currently the only way to use passthrough AR in visionOS will be through RealityKit, more flexibility will be required. I can’t simply apply my own preferences.
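To be concrete about what I mean by procedural geometry, here is a minimal sketch that builds a single triangle from raw vertex data with MeshDescriptor instead of loading a usdz model. It is entities like this that I have not been able to get shadows for.

import RealityKit

// Build a one-triangle mesh from raw vertex data rather than an imported model.
func makeProceduralTriangle() throws -> ModelEntity {
    var descriptor = MeshDescriptor(name: "triangle")
    descriptor.positions = MeshBuffers.Positions([
        SIMD3<Float>(0, 0, 0),
        SIMD3<Float>(0.1, 0, 0),
        SIMD3<Float>(0, 0.1, 0)
    ])
    descriptor.primitives = .triangles([0, 1, 2])

    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh,
                       materials: [SimpleMaterial(color: .white, isMetallic: false)])
}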
I have been reading through the documentation and cannot find a way to alter the user's environment lighting. Is this not possible?
Basically, I would like to darken a room, or change the hue of the environment in the scene the user is seeing. I can think of a few "hacks" to do this, but figured there would be a proper RealityKit way to do so.
If it is possible to "dim" or darken the environment, I could then light up my models with lights but still have the real environment all around.
Post not yet marked as solved
Is there a particle system in RealityKit? If so, can someone point me to the correct documentation/articles?
Post not yet marked as solved
Hello,
In my app I'm trying to delete all but one chosen plane and do some raycasting on that plane. I noticed that whenever I try to delete the other planes, they instantly reappear. Here is some sample code from the ARViewController I'm using that demonstrates the problem:
class ARViewController: UIViewController {

    var arView: ARView!

    // *** Bunch of stuff ***

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        // Iterate through the detected anchors
        for anchor in anchors {
            // Check if the detected anchor is an ARPlaneAnchor
            if let planeAnchor = anchor as? ARPlaneAnchor {
                plane_count += 1
                print("Plane added. Number of plane anchors = \(plane_count)")
            }
        }
    }

    func session(_ session: ARSession, didRemove anchors: [ARAnchor]) {
        for anchor in anchors {
            if let planeAnchor = anchor as? ARPlaneAnchor {
                plane_count -= 1
                print("SESSION CALLED: Plane Removed. Number of plane anchors = \(plane_count)")
            }
        }
    }

    // Remove every anchor currently tracked by the session.
    func deletePlanes() {
        for anchor in arView.session.currentFrame?.anchors ?? [] {
            arView.session.remove(anchor: anchor)
        }
    }
}
When deletePlanes() is called, I'll see the following output populate instantly:
SESSION CALLED: Plane Removed. Number of plane anchors = 2
SESSION CALLED: Plane Removed. Number of plane anchors = 1
SESSION CALLED: Plane Removed. Number of plane anchors = 0
Plane added. Number of plane anchors = 1
Plane added. Number of plane anchors = 2
Plane added. Number of plane anchors = 3
This even occurs when the phone is face down after detecting a few planes. It appears that the planes are not actually being removed from the session.
Please let me know if I'm doing anything wrong here! Thanks.
Post not yet marked as solved
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS.
From the session "Meet Object Capture for iOS", I understand that the API now accepts point cloud data from the iPhone LiDAR sensor to create 3D assets. However, I was not able to find anything in the official Apple documentation on RealityKit and Object Capture that explains how to utilize point cloud data when creating the session.
I have two questions regarding this API.
The original example from the documentation explains how to utilize the depth map from a captured image by embedding the depth map into the HEIC image. This makes me assume that PhotogrammetrySession also uses point cloud data that is embedded in the photo. Is this correct?
I would also like to use the photos (and point cloud data) captured on iOS in a PhotogrammetrySession on macOS for full model detail. I know that PhotogrammetrySession provides a point cloud request result. Will using this output be the same as the data captured on-device by the ObjectCaptureSession?
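For context, here is roughly the macOS-side pipeline I have in mind, as a minimal sketch: it feeds a folder of iPhone captures into PhotogrammetrySession and requests both a model file and the point cloud result. The paths and detail level are placeholders, and in a real app the session would need to be kept alive for the duration of processing.

import Foundation
import RealityKit

func reconstruct() throws {
    let imagesFolder = URL(fileURLWithPath: "/path/to/Captures", isDirectory: true) // placeholder
    let outputModel = URL(fileURLWithPath: "/path/to/Model.usdz")                    // placeholder

    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .normal

    let session = try PhotogrammetrySession(input: imagesFolder, configuration: configuration)

    // Collect results as they arrive.
    Task {
        for try await output in session.outputs {
            switch output {
            case .requestComplete(_, .modelFile(let url)):
                print("Model written to \(url.path)")
            case .requestComplete(_, .pointCloud(let cloud)):
                print("Point cloud result received: \(cloud)")
            case .requestError(let request, let error):
                print("Request \(request) failed: \(error)")
            default:
                break
            }
        }
    }

    // Ask for the reconstructed model and the point cloud in one pass.
    try session.process(requests: [
        .modelFile(url: outputModel, detail: .full),
        .pointCloud
    ])
}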
Thanks everyone in advance and it's been a real pleasure working with the updated Object Capture APIs.
Post not yet marked as solved
Hi,
Suppose I have a table model imported from a usdz file that initially has something like a "wood" material. When a button is clicked, I want the table to take on a "marble" material.
I only know how to load the entity along with its assigned material from a usd file, but I want to load all the different materials, store them somewhere, and be able to assign them dynamically when a button is clicked. Is there a way to do that?
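To make it concrete, here is a minimal sketch of what I'm after, assuming the table loads as a ModelEntity and the marble material comes from a second, donor usdz (the asset names are hypothetical):

import RealityKit

final class MaterialSwapper {

    let table: ModelEntity
    private let woodMaterials: [Material]
    private let marbleMaterials: [Material]

    init() throws {
        // Load the table and a donor entity that carries the marble material
        // (both asset names are hypothetical).
        table = try Entity.loadModel(named: "table")                 // comes in with its "wood" material
        let marbleDonor = try Entity.loadModel(named: "marbleTile")  // only used as a source of materials

        // Keep both material sets around so they can be swapped later.
        woodMaterials = table.model?.materials ?? []
        marbleMaterials = marbleDonor.model?.materials ?? []
    }

    // Call these from the button actions: reassigning the materials array swaps the look in place.
    func applyMarble() { table.model?.materials = marbleMaterials }
    func applyWood() { table.model?.materials = woodMaterials }
}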
Thanks
Post not yet marked as solved
Hi, I am a student from London studying app development in Arizona. I have a few ideas for apps that I believe would add to the Apple AR experience. I was wondering how I should go about getting started with the development process.
Any Guidance would be much appreciated :)
Post not yet marked as solved
Or do I need to test with the Apple Vision Pro on device? I'm wondering if I could just use my iPhone's LiDAR + camera sensors to recreate the test.