Post not yet marked as solved · 35 Views
Hey
So here is the situation: there is a room with a 2.5 m pillar in the centre. On the pillar I want to place a box, with each side carrying an image anchor - the same image on every side?
I then want to glue my 3D model to that cube's centre/anchor locations. I'd like to be able to walk around the pillar in the room and always track the same position for my 3D objects.
How can I do this? As far as I can tell, every anchor I have ever seen is a single-sided flat image. What would be the way to build a four-way anchor?
What would be my best bet?
TIA
Regards
Dariusz
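One approach worth sketching - ARKit has no multi-sided anchor primitive - is to register the four face images as separate reference images, then offset whichever face is detected inward to the box centre, so all four resolve to the same world position. The resource-group name, image setup, and box size below are assumptions, not part of the original question:

```swift
import ARKit
import RealityKit

// Sketch: four *distinct* reference images (one per box face) in an
// AR Resource Group assumed to be named "PillarFaces"; halfDepth is the
// assumed distance from each face to the box centre (30 cm box).
final class PillarAnchorHandler: NSObject, ARSessionDelegate {
    let arView: ARView
    let halfDepth: Float = 0.15
    var pillarAnchor: AnchorEntity?

    init(arView: ARView) {
        self.arView = arView
        super.init()
        let config = ARWorldTrackingConfiguration()
        config.detectionImages = ARReferenceImage.referenceImages(
            inGroupNamed: "PillarFaces", bundle: nil) ?? []
        config.maximumNumberOfTrackedImages = 4
        arView.session.delegate = self
        arView.session.run(config)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let imageAnchor as ARImageAnchor in anchors where pillarAnchor == nil {
            // An image anchor's +Y axis is its surface normal; stepping
            // back along -Y by halfDepth lands on the box centre, so any
            // of the four faces yields the same point.
            var transform = imageAnchor.transform
            let normal = simd_make_float3(transform.columns.1)
            transform.columns.3 -= simd_make_float4(normal * halfDepth, 0)
            let anchor = AnchorEntity(world: transform)
            // anchor.addChild(myModel)  // attach your 3D model here
            arView.scene.addAnchor(anchor)
            pillarAnchor = anchor
        }
    }
}
```

Note that using the *same* artwork on all four sides may keep ARKit from telling the faces apart; four slightly different images (all offset to the shared centre as above) is the safer bet.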
Post marked as solved · 2.7k Views
I have seen this question come up a few times here on the Apple Developer forums (recently noted here - https://developer.apple.com/forums/thread/655505), though I tend to find myself misunderstanding which technology and steps are required to achieve the goal.
In general, my colleague and I are trying to use Apple's Visualizing a Point Cloud Using Scene Depth - https://developer.apple.com/documentation/arkit/visualizing_a_point_cloud_using_scene_depth sample project from WWDC 2020 and save the rendered point cloud as a 3D model. I've seen this achieved (there are quite a few samples of the final exports available on popular 3D modeling websites), but remain unsure how to do so.
From what I can ascertain, Model I/O seems like an ideal framework choice: create an empty MDLAsset and append an MDLObject for each point, to finally end up with a model ready for export.
How would one go about converting each "point" into an MDLObject to append to the MDLAsset? Or am I going down the wrong path?
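One MDLObject per point would be extremely heavy. A lighter sketch - assuming the sample's accumulated points can be pulled out as an array, which is not shown in the original project - is to pack the whole cloud into a single MDLMesh and export that:

```swift
import Foundation
import ModelIO
import simd

// Sketch: pack an assumed [SIMD3<Float>] point cloud into one MDLMesh
// with no faces (a raw vertex set) and export it via MDLAsset.
func exportPointCloud(points: [SIMD3<Float>], to url: URL) throws {
    let allocator = MDLMeshBufferDataAllocator()
    let data = Data(bytes: points,
                    count: points.count * MemoryLayout<SIMD3<Float>>.stride)
    let vertexBuffer = allocator.newBuffer(with: data, type: .vertex)

    let descriptor = MDLVertexDescriptor()
    descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                  format: .float3,
                                                  offset: 0,
                                                  bufferIndex: 0)
    descriptor.layouts[0] = MDLVertexBufferLayout(
        stride: MemoryLayout<SIMD3<Float>>.stride)

    let mesh = MDLMesh(vertexBuffer: vertexBuffer,
                       vertexCount: points.count,
                       descriptor: descriptor,
                       submeshes: [])   // no indices: just the points
    let asset = MDLAsset()
    asset.add(mesh)
    try asset.export(to: url)   // extension picks the format, e.g. .obj
}
```

OBJ export keeps the vertices as `v` lines; how other formats handle a faceless mesh varies, so it is worth testing the target format first.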
Post not yet marked as solved · 23 Views
com.apple.arkit.ardisplaylink.0x2832f0840 (37): EXC_BAD_ACCESS (code=2, address=0x1590901710)
ARKit Face Tracking
The above error occurs intermittently within 20 minutes of starting ARKit face tracking.
We are developing by referring to the link below:
Github
Unity 2021.1.1f1, 2020.3.11f1, 2020.3.2f1, etc. - the same error occurs in all versions.
Below is a link to similar errors:
issuetracker
Post not yet marked as solved · 23 Views
Hello - does someone have an idea how we could use ARKit to determine the user's pose from an image or pattern located in the real scene?
The scenario would be to initialise pose determination when the user scans the pattern (a QR code, for instance); ARKit would then track the user's motion relative to the scanned pattern and derive the new pose parameters. Many thanks.
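One ARKit-only route (a sketch, assuming the pattern is registered as a reference image via ARWorldTrackingConfiguration.detectionImages) is to let image detection anchor the pattern in world space, then express the camera pose in the pattern's coordinate system on every frame:

```swift
import ARKit

// Sketch: once ARKit has detected the reference image, the user's pose
// relative to the pattern is just a change of basis between the image
// anchor's transform and the camera transform.
func cameraPose(relativeTo imageAnchor: ARImageAnchor,
                in frame: ARFrame) -> simd_float4x4 {
    // world -> pattern is the inverse of pattern -> world
    let patternFromWorld = simd_inverse(imageAnchor.transform)
    // camera pose expressed in the pattern's frame
    return simd_mul(patternFromWorld, frame.camera.transform)
}
```

A QR code works as a reference image only if its printed size and contrast are known and sufficient; decoding the QR *payload* would additionally need Vision's barcode detection, which is outside this sketch.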
Post not yet marked as solved · 47 Views
Hey
I've been fighting with this for quite a bit now: how can I make a chrome material programmatically?
I'm making a 3D editor and I want to control/create materials in-app.
No matter what value I give to clear coat/metallic/specular/etc., the reflection amount never gets anywhere near the levels chrome needs.
TIA
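A minimal sketch of the usual PBR recipe: chrome is a fully metallic surface with near-zero roughness, and the mirror look then depends almost entirely on what there is to reflect, not on the scalar values alone:

```swift
import RealityKit
import UIKit

// Sketch: full metallic + near-zero roughness is the standard PBR
// "chrome" setup; a slightly off-white tint keeps highlights natural.
var chrome = PhysicallyBasedMaterial()
chrome.baseColor = .init(tint: UIColor(white: 0.95, alpha: 1.0))
chrome.metallic = .init(floatLiteral: 1.0)
chrome.roughness = .init(floatLiteral: 0.0)
```

If that still looks dull, the missing piece is likely the environment rather than the material: in an AR session, reflections come from camera-derived probes (ARWorldTrackingConfiguration.environmentTexturing = .automatic), and in a non-AR ARView an image-based lighting resource is needed before a mirror finish has anything to show.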
Post not yet marked as solved · 73 Views
Hey
How can I get the name of a material that is assigned to my USD mesh?
for mat in importedModel.model!.materials {
    print(mat)
    print(mat as? SimpleMaterial)
}
I'm getting mostly nil when trying to cast it. I want to reconfigure all my materials upon import, but I can't do that without knowing which material is which.
Given that 15.0 got nice PBR materials, what would be the correct way to do it?
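A likely reason for the nils is the cast target: USD materials tend to import as PhysicallyBasedMaterial on iOS 15, not SimpleMaterial. A sketch (reusing the post's assumed `importedModel`):

```swift
import RealityKit

// Sketch: try the PBR type first; fall back to printing the runtime
// type so unexpected material kinds are still visible.
for (index, material) in importedModel.model!.materials.enumerated() {
    if let pbr = material as? PhysicallyBasedMaterial {
        print("material \(index): PBR, tint \(pbr.baseColor.tint)")
    } else {
        print("material \(index): \(type(of: material))")
    }
}
```

As far as I know, RealityKit does not surface the USD material *name* on the imported material itself, so matching by index order against the source USD is the common workaround; treat that as an assumption to verify.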
Post not yet marked as solved · 63 Views
Hello, I am creating an app like the IKEA furniture app, where you can place AR objects to see how they look. My question: is there a way to use the bounding boxes as snap points? What I'm trying to accomplish is lining the objects up by using their bounding boxes' extents as snap points.
Any help would be much appreciated!
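There is no built-in snapping, but the extents are available via visualBounds, so a sketch of one snap rule (flush along world X; axis choice and entity names are assumptions) looks like:

```swift
import RealityKit

// Sketch: move `next` so its bounding box sits flush against the right
// face of `previous` along the world X axis. visualBounds(relativeTo: nil)
// returns the entity's world-space axis-aligned bounds.
func snapInLine(_ next: Entity, after previous: Entity) {
    let prevBounds = previous.visualBounds(relativeTo: nil)
    let nextBounds = next.visualBounds(relativeTo: nil)
    let targetMinX = prevBounds.max.x              // right face of previous
    let offsetX = targetMinX - nextBounds.min.x    // shift needed to touch it
    next.position.x += offsetX
}
```

For drag-to-snap behaviour, the same idea applies with a tolerance: only apply the offset when the gap is below a few centimetres, so objects "click" together near the snap point.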
Post not yet marked as solved · 41 Views
Hey
I'm trying to understand PBR materials (PhysicallyBasedMaterial); for the moment I have this test:
var myMaterial = PhysicallyBasedMaterial()
myMaterial.baseColor.tint = UIColor(white: 1.25, alpha: 1.0)
//myMaterial.blending = .transparent(opacity: .init(floatLiteral: 0.01)) /* glass material work! */
myMaterial.clearcoat.scale = .init(floatLiteral: 0.8)
myMaterial.clearcoatRoughness.scale = .init(floatLiteral: 0.0)
myMaterial.faceCulling = .none
myMaterial.metallic = .init(floatLiteral: 0.9)
myMaterial.roughness = .init(floatLiteral: 0.5)
let sheenColor = UIColor(red: 0.5, green: 0.5, blue: 0.5, alpha: 1.0)
myMaterial.sheen = .init(tint: sheenColor)
myMaterial.specular = .init(floatLiteral: 10.0)
As soon as I assign sheen, the entire material changes and I have only "sheen" control. I can't control the sheen's opacity or how rough it is. It's like another material on top of the PBR one... what do I do with it?
TIA
Post not yet marked as solved · 58 Views
Hello
I'm in need of a precise raycast intersection function.
Currently the native shape generators that RealityKit etc. provide are:
generateConvex
generateBox
generateSphere
generateCapsule
But I would like "exact": just take the mesh as-is and use it for the raycast intersection, giving me the exact point on the mesh, not an approximation.
I know it's something a "game" would NOT want, but for my application I need the exact intersection point.
Ideally I would like to call
view.raycast() or hitTest(), but give it a list of meshes I want to raycast against, so it runs a specific test against specific meshes only.
If this cannot be provided by the native library, then I need to:
Get the vertex/index data of a mesh to rebuild my own mesh structure - pain
Get the exact camera matrix etc. to create my own ray/cast/intersection test - more pain
Essentially, I'd have to reimplement my own raycast test from zero.
Would it be possible to add an exact-mesh shape to the SDK? I don't want it to generate anything, just take the mesh as-is. I tried generating a convex mesh for my model; after 5 minutes of generation time I gave up and stopped the process. So yeah... high poly counts - say a USD with 30,000 objects and up to 40 million polygons. Let's think "BIG" here, like an automotive CAD data set that was lightly optimised.
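If it does come to the roll-your-own route, the core primitive is well known: Möller-Trumbore ray/triangle intersection. A self-contained sketch (iterating it over every triangle, ideally behind a BVH at these polygon counts, yields the exact hit point as origin + t * direction):

```swift
import simd

// Sketch: Möller-Trumbore ray/triangle intersection. Returns the
// distance t along the ray to the hit, or nil on a miss.
func intersect(origin: SIMD3<Float>, direction: SIMD3<Float>,
               v0: SIMD3<Float>, v1: SIMD3<Float>, v2: SIMD3<Float>) -> Float? {
    let e1 = v1 - v0, e2 = v2 - v0
    let p = cross(direction, e2)
    let det = dot(e1, p)
    if abs(det) < 1e-8 { return nil }          // ray parallel to triangle
    let invDet = 1 / det
    let s = origin - v0
    let u = dot(s, p) * invDet
    if u < 0 || u > 1 { return nil }           // outside barycentric range
    let q = cross(s, e1)
    let v = dot(direction, q) * invDet
    if v < 0 || u + v > 1 { return nil }
    let t = dot(e2, q) * invDet
    return t > 1e-8 ? t : nil                  // hit in front of origin only
}
```

The ray itself can come from ARView.ray(through:) for a screen point, so only the triangle iteration and acceleration structure need to be hand-built.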
Regards
Dariusz
Post not yet marked as solved · 556 Views
From my understanding, you capture images on an iOS device and send them to macOS, which uses photogrammetry via the Object Capture API to process them into a 3D model…
Is it possible to exclude macOS and pull the API into the app itself, so it does all the processing within the app - from scanning to processing? I see there are already scanner apps on the App Store, so I know it is possible to create 3D models on the iPhone within an app - but can this API do that? If not, any resources to point me in the right direction?
(I’m working on creating a 3D food app, that scans food items and turns them into 3D models for restaurant owners… I’d like the restaurant owner to be able to scan their food item all within the app itself)
Post not yet marked as solved · 35 Views
It’s very easy to get a rotation value:
guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }
let bodyPosition = simd_make_float3(bodyAnchor.transform.columns.3)
let bodyOrientation = Transform(matrix: bodyAnchor.transform).rotation
let rotationAngleInDregree = bodyOrientation.angle * 180 / .pi
Unfortunately I don't get a result between 0 and 360°. No matter whether the person turns clockwise or the other way around, I get the same values; somewhere around 200° there is a jump.
What's the best way to get a unique value between 0 and 360°?
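The underlying issue is that `.angle` is the magnitude of an axis-angle rotation, so it cannot distinguish direction. One sketch of a fix is to extract the yaw (heading about the world Y axis) from the transform's basis vectors with atan2, which is signed and folds cleanly into 0...360° (the choice of the -Z column as "forward" is an assumption to check against the body anchor's convention):

```swift
import simd

// Sketch: derive a signed heading from the anchor's transform instead
// of the unsigned axis-angle magnitude.
func headingDegrees(from transform: simd_float4x4) -> Float {
    // Project the assumed forward axis (-Z column) onto the X-Z plane.
    let forward = -simd_make_float3(transform.columns.2)
    let yaw = atan2(forward.x, forward.z)         // signed, -pi ... pi
    let degrees = yaw * 180 / .pi
    return degrees < 0 ? degrees + 360 : degrees  // fold into 0 ... 360
}
```

Called with `bodyAnchor.transform`, this yields a value that increases one way around and decreases the other, with a single wrap at 0°/360° instead of the ambiguous jump near 200°.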
Post not yet marked as solved · 84 Views
I would like to know which API can be used to create a 3D scene (e.g. a room); the SDK does not seem to provide many ways to do this at the moment. Looking forward to your answer.
Post not yet marked as solved · 165 Views
Hi,
I am still working on an app that places simple 3D models in different outdoor locations. I save the location (world data) of the nearby environment, then load and reconstruct the scene later on. I use the latest Apple device (iPhone 12 Pro) with the LiDAR scanner. The strange thing is that often you can't reconstruct the experience. Is the stored (LiDAR) data too accurate, so that the scene has to be exactly the same? For example, could it be a problem if a flower leaf was broken, making reconstruction impossible?
In my case (example) I created two separate scenes. I placed one arrow model (.usdz) on a flowerpot and one on a statue. I saved both, checked by reloading (the model was still there) and came back the next day. It was rainy that day. I couldn't reproduce the AR scene around the flowerpot, but the statue was no problem. Is there a way to make the scene simpler to recognize? For example, is it better to add horizontal and vertical plane detection besides the meshes? Or change the way the world mapping status is used? Another solution could be to place more models (arrows), so that one of the anchors should match.
Thanks in advance,
Marc
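On the world mapping status point: one common safeguard is to persist the map only once ARKit reports it as fully mapped, since maps saved while still .limited or .extending tend to relocalize poorly when conditions change (lighting, weather). A sketch:

```swift
import ARKit

// Sketch: gate saving on worldMappingStatus, then archive the map.
func saveWorldMapIfReady(session: ARSession, to url: URL) {
    guard session.currentFrame?.worldMappingStatus == .mapped else { return }
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else { return }
        if let data = try? NSKeyedArchiver.archivedData(
            withRootObject: map, requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}
```

Scanning the surroundings from several angles before saving, and favouring large static structures (like the statue) over small changeable ones (like the flowerpot), should also raise the relocalization success rate.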
Post marked as solved · 122 Views
Hi, I'm new to ARKit development. Can anyone suggest ways to perform automatic wall or corner detection with ARKit?
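For walls, ARKit's built-in route is vertical plane detection; corners can then be inferred where two detected vertical planes meet. A minimal configuration sketch (the corner-intersection step itself is left out):

```swift
import ARKit

// Sketch: enable vertical plane detection for walls; on LiDAR devices,
// scene reconstruction adds a denser mesh of the room geometry.
let config = ARWorldTrackingConfiguration()
config.planeDetection = [.vertical]
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    config.sceneReconstruction = .mesh
}
// arView.session.run(config)  // assuming an existing ARView/ARSCNView session
```

Detected walls then arrive as ARPlaneAnchor instances via the session delegate; intersecting the planes of two adjacent wall anchors gives a candidate corner line.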