Hello!
When I create the AnchorEntity in the following way, I see a shadow cast by the object:
let anchorEntity = AnchorEntity(plane: .any)
anchorEntity.addChild(parentEntity)
but in my scenario I need to work with ARAnchors in the session (I am implementing a multi-user session and need to operate on CollaborationData, which, as I understand it, is currently impossible without binding to ARAnchors), so I am trying to write the code like this:
let anchorEntity = AnchorEntity(plane: .any)
anchorEntity.anchoring = AnchoringComponent(anchor)
anchorEntity.addChild(parentEntity)
or, more simply:
let anchorEntity = AnchorEntity(anchor: anchor)
anchorEntity.addChild(parentEntity)
With either of these, there is no shadow under the placed object.
I figured that the bottom of my object (the AnchorEntity) might be placed below the level of the detected plane, so I tried adding a few centimeters on the Y axis when computing the placement point:
let newTranslate = SIMD4<Float>(x: transform.columns.3.x, y: transform.columns.3.y + 0.10, z: transform.columns.3.z, w: transform.columns.3.w)
let anchorTransform = simd_float4x4(columns: (transform.columns.0, transform.columns.1, transform.columns.2, newTranslate))
let anchor = ARAnchor(name: arAnchorName, transform: anchorTransform)
self.arSession.add(anchor: anchor)
This actually lifts my model up 10 centimeters when placed, but there is still no shadow.
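For reference, the same translation tweak can be written more compactly by copying the transform and mutating its translation column in place (same math as above, just shorter):

```swift
#if canImport(simd)
import simd

// Same adjustment as the snippet above: copy the plane-hit transform and
// raise its world-space translation by `offset` meters before creating
// the ARAnchor from it.
func raised(_ transform: simd_float4x4, byMeters offset: Float) -> simd_float4x4 {
    var result = transform
    result.columns.3.y += offset
    return result
}
#endif
```

In the placement code this would just be `let anchor = ARAnchor(name: arAnchorName, transform: raised(transform, byMeters: 0.10))`.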
Thanks for any help!
Post not yet marked as solved
Is there a way to access the SDK code of the Measure app on our iPhone, iPad, or iPod touch?
Thank you!
Is there an equivalent to MultipeerConnectivityService that implements SynchronizationService over TCP/IP connections?
I'd like to have two users in separate locations, each with a local ARAnchor but then have a synchronized RealityKit scene graph attached to their separate ARAnchors.
Is this possible?
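For context, the multipeer baseline I'd be replacing is just a few lines. This is a sketch of my current setup (the MCSession configuration is elided); what I'm after is the same hook, but over TCP/IP:

```swift
#if canImport(RealityKit) && canImport(MultipeerConnectivity)
import MultipeerConnectivity
import RealityKit

// Current approach: hand an existing MCSession to RealityKit's built-in
// synchronization service. I'd like a SynchronizationService that does
// the same over a plain TCP/IP connection instead.
func enableSynchronization(on arView: ARView, using mcSession: MCSession) throws {
    arView.scene.synchronizationService = try MultipeerConnectivityService(session: mcSession)
}
#endif
```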
Thanks,
Hello, I am preparing for WWDC 2021, but I have run into a small problem: I don't know how to create face geometry with RealityKit the way I can with SceneKit.
SceneKit:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let faceMesh = ARSCNFaceGeometry(device: sceneView.device!)
    let node = SCNNode(geometry: faceMesh)
    node.geometry?.firstMaterial?.fillMode = .lines
    return node
}
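My best guess at a RealityKit equivalent so far is to copy the ARFaceGeometry buffers into a MeshDescriptor (RealityKit 2 / iOS 15), but I haven't verified it; treat this as an untested sketch:

```swift
#if canImport(ARKit) && canImport(RealityKit)
import ARKit
import RealityKit

// Untested sketch: build a RealityKit MeshResource from an ARFaceAnchor.
// ARFaceGeometry exposes vertices as [SIMD3<Float>] and triangleIndices
// as [Int16], which MeshDescriptor can consume after widening to UInt32.
func faceMeshResource(for faceAnchor: ARFaceAnchor) throws -> MeshResource {
    let geometry = faceAnchor.geometry
    var descriptor = MeshDescriptor(name: "face")
    descriptor.positions = MeshBuffers.Positions(geometry.vertices)
    descriptor.primitives = .triangles(geometry.triangleIndices.map { UInt32($0) })
    return try MeshResource.generate(from: [descriptor])
}
#endif
```

I don't know of a RealityKit counterpart to the `.lines` fill mode, so this only covers the geometry itself.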
On my MacBook I see a process called RAQLThumbnailExtension that is using 7 GB of RAM(!). It is part of RealityKit:
/System/Library/Frameworks/RealityKit.framework/PlugIns/RAQLThumbnailExtension.appex/Contents/MacOS/RAQLThumbnailExtension
Doing a sample shows a stack trace with the following interesting line: specialized static AssetLoader.importUSDZFile(info:completion:) (in RAQLThumbnailExtension)
and within that call there are calls to:
re::AssetManager::createAssetEntryForNamedAsset
re::MeshAssetLoader::createRuntimeData
re::createMeshCollectionFromMeshAsset
re::mtl::Device::makeBuffer
(full stack is very deep so I won't share it all here.)
The Open Files view in Activity Monitor shows that this process has opened up a USDZ file that I downloaded some time ago. If I kill the process, it starts up again, reloads the file and goes back to using 7GB RAM on my machine.
My guess is that this process is trying to be helpful by rendering thumbnails of any USD files on the machine, but in this case it is getting bogged down and cannot complete the render for this file. It is a large USD file of a very large 3D scene, so I'm guessing it just can't handle it and is stuck.
I am curious whether there is a way to turn off thumbnail generation for USD files. This seems like something I can live without, especially if I get 7 GB of RAM back in return.
Also, in general, autogenerating thumbnails of USD scenes doesn't seem like a great idea given how large and complex they can be.
Hi,
I'm using the sample code to create a 3D object from photos using PhotogrammetrySession, but it returns this error:
Error creating session: cantCreateSession("Native session create failed: CPGReturn(rawValue: -11)")
Sample code I've used is this and this.
Any idea?
Thanks in advance!
When I use a DirectionalLight with a Shadow component, I often see vibrating graphic artifacts, from wiggling at the edges to distracting herringbone patterns across flat surfaces.
To a lesser degree I also see similar artifacts along the edges of straight objects alongside ground shadows when .receivesLighting is set in sceneUnderstanding.
I discussed these with Michael in our RealityKit lab earlier. I understand this might be "shadow acne" and that the shadow's depthBias parameter may affect it.
Can we have an overview of these issues and an approach to adjusting parameters to eliminate them?
I've got a quick question regarding ARKit's scene reconstruction. Is it possible to get the world coordinates of the faces/vertices that are part of the generated mesh, or to select them individually?
After looking through Apple's documentation and tinkering with the example apps, it does not seem possible using the faces property of ARMeshGeometry, but the vertices property does return coordinates. Here's Apple's code snippet for accessing specific vertices:
func vertex(at index: UInt32) -> SIMD3<Float> {
    assert(vertices.format == MTLVertexFormat.float3, "Expected three floats (twelve bytes) per vertex.")
    let vertexPointer = vertices.buffer.contents().advanced(by: vertices.offset + (vertices.stride * Int(index)))
    let vertex = vertexPointer.assumingMemoryBound(to: SIMD3<Float>.self).pointee
    return vertex
}
I've tried to place objects at those coordinates to see what they refer to, but they somehow end up in the middle of the room, far away from the mesh, leaving me a bit confused as to what the vertex coordinates actually refer to.
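A hunch I'm currently testing: the vertex coordinates may be expressed in the mesh anchor's local space, so they would need the anchor's transform applied before being used as world positions. Spelled out with only standard-library SIMD types (in app code, `anchor.transform * SIMD4<Float>(vertex, 1)` with simd should be equivalent):

```swift
// Hunch: ARMeshGeometry vertices are local to their ARMeshAnchor, so a
// world-space position needs the anchor's 4x4 transform applied.
// `columns` holds the anchor transform's four columns, column-major,
// the same layout simd_float4x4 uses.
func worldPosition(ofLocalVertex v: SIMD3<Float>,
                   anchorColumns columns: [SIMD4<Float>]) -> SIMD3<Float> {
    precondition(columns.count == 4, "Expected a 4x4 transform")
    // Multiply the homogeneous point (v, 1) by the matrix, column by column.
    let h = columns[0] * v.x + columns[1] * v.y + columns[2] * v.z + columns[3]
    return SIMD3<Float>(h.x, h.y, h.z)
}
```

If this is right, my objects ended up in the middle of the room because I was treating anchor-local coordinates as world coordinates.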
I'd appreciate any answers on how to approach this!
Hello, when I use Photogrammetry, I receive this error:
Error creating session: cantCreateSession("Native session create failed: CPGReturn(rawValue: -11)")
My specs are listed below. I think my MacBook Pro should be supported. Please let me know if I missed something or if there is a fix for this issue.
Model Name: MacBook Pro (2019)
Model Identifier: MacBookPro15,2
Processor Name: Quad-Core Intel Core i7
Processor Speed: 2.8 GHz
Number of Processors: 1
Total Number of Cores: 4
L2 Cache (per Core): 256 KB
L3 Cache: 8 MB
Hyper-Threading Technology: Enabled
Memory: 16 GB
I encountered this error while using a PhotogrammetrySample object, after adding gravity and a depth map to the sample.
Is there any more information on what an SfM map is, or how to fix this?
I used the Hello Photogrammetry sample project as a base. Here's the output:
[Photogrammetry] No SfM map found in native output!
2021-06-27 13:25:59.683504-0400 HelloPhotogrammetry[30142:968357] [Photogrammetry] Got error in completion: reconstructionFailed(RealityFoundation.PhotogrammetrySession.Request.modelFile(url: file:///Users/mattei/Desktop/Photogrammetry/Exports/Mic2.usdz, detail: RealityFoundation.PhotogrammetrySession.Request.Detail.medium, geometry: nil), "Reconstruction failed!")
Thanks so much!
I'm not sure what happened; I'm pretty sure I wasn't having issues importing these file types a few weeks ago. Just to be sure, I checked the web page for Reality Converter, and yup, GLTF is listed:
Simply drag-and-drop common 3D file formats, such as .obj, .gltf and .usd
I haven't updated anything on my computer, so I don't know how anything could have changed. Does anyone know what the deal could be?
Hello,
in this project https://developer.apple.com/documentation/arkit/content_anchors/tracking_and_visualizing_faces there is some sample code that describes how to map the camera feed to an object with SceneKit and a shader modifier.
I would like to know if there is an easy way to achieve the same thing with a CustomMaterial and RealityKit 2.
Specifically I'm interested in what would be the best way to pass in the background of the RealityKit environment as a texture to the custom shader.
In SceneKit this was really easy as one could just do the following:
material.diffuse.contents = sceneView.scene.background.contents
As the texture input for custom material requires a TextureResource I would probably need a way to create a CGImage from the background or camera feed on the fly.
What I've tried so far is accessing the captured image from the camera feed and creating a CGImage from the pixel buffer like so:
guard
    let frame = arView.session.currentFrame,
    let cameraFeedTexture = CGImage.create(pixelBuffer: frame.capturedImage),
    let textureResource = try? TextureResource.generate(from: cameraFeedTexture, withName: "cameraFeedTexture", options: .init(semantic: .color))
else {
    return
}

// assign texture
customMaterial.custom.texture = .init(textureResource)
extension CGImage {
    public static func create(pixelBuffer: CVPixelBuffer) -> CGImage? {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
        return cgImage
    }
}
This seems wasteful though and is also quite slow.
Is there any other way to accomplish this efficiently or would I need to go the post processing route?
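One idea I haven't tried yet: RealityKit 2 adds TextureResource.DrawableQueue, which looks like it could keep the camera frames on the GPU instead of round-tripping through CGImage. A rough, untested sketch of the setup, assuming I'm reading the API right (the per-frame Metal copy into the drawable's texture is elided):

```swift
#if canImport(RealityKit) && canImport(Metal)
import Metal
import RealityKit

// Untested sketch: a drawable queue sized for the camera feed. The plan
// would be to attach it to the custom material's texture resource via
// replace(withDrawables:) and blit each captured frame into
// nextDrawable().texture with Metal, avoiding the CGImage round trip.
@available(iOS 15.0, macOS 12.0, *)
func makeCameraFeedQueue(width: Int, height: Int) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: width,
        height: height,
        usage: [.shaderRead, .renderTarget],
        mipmapsMode: .none
    )
    return try TextureResource.DrawableQueue(descriptor)
}
#endif
```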
In the sample code, the view's displayTransform is also passed in as an SCNMatrix4. CustomMaterial's custom.value only accepts a SIMD4<Float>, though. Is there another way to pass in the matrix?
Another idea I've had was to create a CustomMaterial from an OcclusionMaterial, which already seems to contain information about the camera feed, but so far I've had no luck with it.
Thanks for the support!
Hey,
I have run several tests with masks in the folder passed to the PhotogrammetrySession initializer. The masks do seem to be taken into account, as the results differ from when I don't provide them.
Unfortunately, the results aren't as good as one would expect when masks are provided.
Has anyone been able to make it work? How?
Example of the ImageMagick conversion applied, and the resulting filename (my original masks are in PNG format):
magick mogrify -monitor -format tif -depth 8 *.png
IMG_0001_mask.TIF
I created a 3D model using Object Capture.
https://developer.apple.com/videos/play/wwdc2021/10076/
I want to know where each image used to create the model was taken, in the object's coordinate space.
Can I get this information from the PhotogrammetrySession?
I am experiencing a single video frame glitch when transitioning from one RealityKit Entity animation to another when transitionDuration is non-zero.
This is with the current RealityKit and iOS 14.6 (i.e., not the betas).
Is this a known issue?
Have people succeeded in transitioning from one animation to another with a non-zero transition time and no strange blink?
Background:
I loaded two USDZ models, each with a different animation. One model will be shown, but the AnimationResource from the second model will (at some point) be applied to the first model.
I originally created the models with Adobe's mixamo site (they are characters moving), downloaded the .fbx files, and then converted them to USDZ with Apple's "Reality Converter".
I start the first model (robot) with its animation, then at some point I apply the animation from the second model (nextAnimationToPlay) to the original model (robot).
If the transitionDuration is set to something other than 0, there appears a single video frame glitch (or blink) before the animation transition occurs (that single frame may be the model's original T-pose, but I'm not certain).
robot.playAnimation(nextAnimationToPlay, transitionDuration: 1.0, startsPaused: false)
If transitionDuration is set to 0, there is no glitch, but then I lose the smooth transition.
I have tried variations: for example, setting startsPaused to true and then calling resume() on the playback controller, and waiting until the current animation completes before calling playAnimation() with the next animation. Still, I get the quick blink.
Any suggestions or pointers would be appreciated.
Thanks,
I'm using the GeoTrackingExample project from Apple, and there is a bug in the logic: after you reset the AR session, the data saved to the URL still retains the old anchor information. The steps are as follows:
1. Start the app.
2. Tap in the ARView and place an object.
3. Select Reset AR Session from the menu.
4. Tap in the ARView to place an object.
5. Select Save Anchor from the menu.
6. Save the file to the device.
7. Reset the AR session.
8. Load the saved file.
When the file loads, it loads both the object placed in step 2 (and removed in step 3) and the object placed in step 4. It should only have saved the object added in step 4, not both. Stepping through the code to debug, the anchors appear to be removed from the ARView, yet the saved data somehow still contains both objects. What else do I need to clear so that objects from previous sessions are not included in the saved data? Is the saved data modified when an object is added to the ARView? Please help! Thanks
Is there a way in RealityKit to get only the Entities within the camera's field of view during some sort of update?
Hi,
I have a MacPro, and am looking to buy a Sapphire AMD RX 580 8GB GPU.
(Since my PowerColor R9 280X 3GB falls just shy of the minimum 4 GB requirement...)
And I'm wondering: what if I bought two RX 580s? Would Object Capture take advantage of a dual-GPU setup, and if so, would it increase performance?
PS: Just to clarify, I'm not talking about a Dual-Link/CrossFire setup (since that practice is pretty much dead); I'm just wondering whether Object Capture would recognize that there are two identical GPUs in the system and use both.
Hi, I'm using AR Foundation in Unity for an AR app. There are some images used as markers, and when the app detects one it starts playing an animation.
On Android it works perfectly, but on iOS the animation doesn't loop. It's as if the iPhone just stops tracking the marker: the app plays the animation, but at some point the animation simply stops, and I have to move the phone and look at the marker again.
How can I make the animation loop work?
Thanks.
Is there a way to create a 3D reconstruction of a room using Swift or Objective-C without LiDAR? Any help or recommendations would be greatly appreciated.