I am just starting to learn AR. Thanks for the help.
I am trying to anchor large objects to a specific location in an open area. I tried anchoring with an image and with an object in Reality Composer, but after anchoring, the objects do not stay in place when I move around. ARGeoTrackingConfiguration is not available in my region. And if I scan the surrounding world and later relocalize against that scan, then a rainy day or the slightest change in the area (for example, a mowed lawn) means the terrain is no longer recognized. What do you advise?
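One option when geo tracking is unavailable is to save an ARWorldMap after the first session and relocalize against it later. A minimal sketch, assuming a session whose world-mapping status has reached .mapped; the file name "worldMap.arexperience" and both helper functions are my own choices:

```swift
import Foundation
#if canImport(ARKit)
import ARKit
#endif

// Hypothetical location for the archived map; the file name is arbitrary.
func worldMapURL() -> URL {
    let dir = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first
        ?? FileManager.default.temporaryDirectory
    return dir.appendingPathComponent("worldMap.arexperience")
}

#if canImport(ARKit)
// Save the current world map (call when worldMappingStatus == .mapped).
func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { map, _ in
        guard let map = map,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true)
        else { return }
        try? data.write(to: worldMapURL())
    }
}

// Later, relocalize against the saved map.
func restoreWorldMap(into session: ARSession) {
    guard let data = try? Data(contentsOf: worldMapURL()),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                            from: data) else { return }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
#endif
```

Note that world-map relocalization also depends on the environment looking similar to when it was scanned, so it shares the weakness described above.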
Hello!
I want to build an app that lets devices with the LiDAR Scanner scan their environment and share their scans with one another. As of now, I can create the mesh using the LiDAR Scanner and export it as an OBJ file.
However, I would like the ability to map textures and colors onto this model. How would one go about getting the real-world texture and placing it onto the OBJ model?
Thank you!
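For reference, the usual approach is to project each mesh vertex into the camera image using the frame's camera intrinsics and sample the color there. A hypothetical sketch of just the projection math (pinhole model, vertex already in camera space, z pointing forward); the function and parameter names are my own:

```swift
// Project a camera-space point (x, y, z) to normalized texture coordinates (u, v)
// using pinhole intrinsics fx, fy (focal lengths) and cx, cy (principal point),
// for an image of the given pixel size. Returns nil if the point is behind the
// camera or falls outside the image.
func textureUV(x: Double, y: Double, z: Double,
               fx: Double, fy: Double, cx: Double, cy: Double,
               imageWidth: Double, imageHeight: Double) -> (u: Double, v: Double)? {
    guard z > 0 else { return nil }          // behind the camera
    let px = fx * x / z + cx                 // pixel column
    let py = fy * y / z + cy                 // pixel row
    guard px >= 0, px <= imageWidth, py >= 0, py <= imageHeight else { return nil }
    return (px / imageWidth, py / imageHeight)
}
```

In ARKit the intrinsics come from ARCamera.intrinsics and the image from ARFrame.capturedImage; ARCamera.projectPoint(_:orientation:viewportSize:) can perform this projection for you. The resulting UVs and sampled colors can then be written out alongside the OBJ as a texture.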
Hi
I'm exporting an object from Blender to an FBX (or GLTF/GLB) file to then convert it in Reality Converter.
In Reality Converter everything looks good: all animations work and the object is positioned at 0 0 0. However, when I view the exported USDZ file in Safari on my iPhone, the object does not align to the horizontal plane but floats in the air, even though in Reality Composer the object is aligned perfectly with the floor.
How can you set the correct alignment position for the object, either in Blender, Reality Composer or Reality Converter?
Thanks
Does RealityKit have an API to test if a ModelEntity (or its CollisionComponent) is currently visible on the screen?
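Not that I'm aware of directly. A common workaround is to project the entity's position (or its bounding-box corners) with ARView.project(_:) and test the result against arView.bounds; note this only tests frustum containment, not occlusion by other geometry. The underlying clip-space test looks roughly like this (a sketch with my own helper, using a row-major 4x4 view-projection matrix):

```swift
// Multiply a row-major 4x4 matrix by (x, y, z, 1) and test whether the
// resulting clip-space point lies inside the view frustum.
func isInsideFrustum(x: Double, y: Double, z: Double,
                     viewProjection m: [[Double]]) -> Bool {
    precondition(m.count == 4 && m.allSatisfy { $0.count == 4 })
    let p = [x, y, z, 1.0]
    var clip = [0.0, 0.0, 0.0, 0.0]
    for row in 0..<4 {
        for col in 0..<4 { clip[row] += m[row][col] * p[col] }
    }
    let w = clip[3]
    guard w > 0 else { return false }                 // behind the camera
    return abs(clip[0]) <= w && abs(clip[1]) <= w     // left/right, top/bottom planes
        && clip[2] >= 0 && clip[2] <= w               // near/far (Metal-style 0...w depth)
}
```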
During testing of my app, the frames per second (shown either in the Xcode debug navigator or via ARView's .showStatistics) sometimes drops by half and stays down there.
This low FPS continues even when I kill the app completely and restart.
However, after giving my phone a break, the frame rate returns to 60 fps.
Does ARKit automatically throttle down FPS when the device gets too hot?
If so, is there a signal my program can catch from ARKit or the OS that can tell me this is happening?
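The OS-level signal for thermal pressure is ProcessInfo.thermalState, together with the ProcessInfo.thermalStateDidChangeNotification notification. A minimal sketch; the shouldReduceWorkload helper and its raw-value threshold (nominal = 0 ... critical = 3, backing off at .serious) are my own:

```swift
import Foundation

// ProcessInfo.ThermalState raw values: nominal = 0, fair = 1, serious = 2, critical = 3.
// Treat .serious and above as "back off" territory.
func shouldReduceWorkload(thermalStateRawValue: Int) -> Bool {
    thermalStateRawValue >= 2
}

#if os(iOS) || os(macOS)
// Observe thermal-state changes and react, e.g. by lowering render quality.
let observer = NotificationCenter.default.addObserver(
    forName: ProcessInfo.thermalStateDidChangeNotification,
    object: nil, queue: .main
) { _ in
    let state = ProcessInfo.processInfo.thermalState
    if shouldReduceWorkload(thermalStateRawValue: state.rawValue) {
        print("Thermal pressure: reduce rendering work")
    }
}
_ = observer
#endif
```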
I am an architectural draftsman, and I am developing an AR app so clients can view architectural designs. It will primarily be used by a prefab house manufacturer's clients so they can visualize these houses on their property. I have hired a remote developer to help me build some tricky things in Unity; currently he has the app running under his account and is using TestFlight so I can try it on my phone.
My question is: how should we go about deploying the app once it's done? Is there a way he can deploy it under my business name? I don't want to give him my Apple passwords. Or should I ask him to send me a certain file so I can deploy it? Or maybe I should create a repository that he pushes to and I compile in Xcode?
If I sound inexperienced, it's because I am. I've built and managed to get one app onto TestFlight before, so please pardon my naivety. I just want to make sure I set this up the right way. Is there any way he might suggest doing this that I should avoid? I'm assuming he may suggest deploying the app with his own account, which I would think I'd want to avoid, right? Thanks in advance.
I work in the thoroughbred industry. I am interested in capturing a 3D model of a racehorse (at rest) to later use in a dataset for analysis.
A recent paper (see "Body measurement of riding horses with a versatile tablet-type 3D scanning device") used the iPhone 12, a commercial app (Scandy), and LiDAR to create 3D models of the horse. It reads as a fairly straightforward process; however, I was wondering if there is any benefit to using Object Capture over LiDAR. It would seem just as easy to walk around the horse capturing video, and then extract frames from the video for Object Capture.
In terms of creating 3D models, is one method better/more accurate than another?
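If you do go the video route, pulling still frames at regular intervals is straightforward with AVAssetImageGenerator. A hedged sketch; the helper names and the sampling interval are my own, and Object Capture may still do better with deliberately shot, sharp stills than with frames from compressed video:

```swift
import Foundation

// Sample times (in seconds) at a fixed interval across a clip's duration.
func sampleTimes(duration: Double, interval: Double) -> [Double] {
    Array(stride(from: 0.0, to: duration, by: interval))
}

#if canImport(AVFoundation)
import AVFoundation

// Extract one CGImage per sample time from a video file.
func extractFrames(from url: URL, interval: Double) throws -> [CGImage] {
    let asset = AVURLAsset(url: url)
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true   // respect video orientation
    return try sampleTimes(duration: asset.duration.seconds, interval: interval).map {
        try generator.copyCGImage(at: CMTime(seconds: $0, preferredTimescale: 600),
                                  actualTime: nil)
    }
}
#endif
```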
We are able to generate a 3D mesh model, but it appears white, as we didn't get the texture files referenced in the .mtl file. We found a way to generate a textured model from a set of images at the links below:
https://developer.apple.com/documentation/realitykit/creating_3d_objects_from_photographs
LiDAR and RealityKit – Capture a Real World Texture for a Scanned Model
However, photogrammetry (the Object Capture API) works only on the Mac; we want to achieve this on iPhone and iPad.
We can see this working in apps like "3D Scanner App" and "Polycam".
Please suggest how we can resolve this.
Thanks in Advance.
I'm on macOS 12 (Monterey) and Xcode 13, but I still get the error "Cannot find type 'PhotogrammetrySession' in scope".
I tried restarting Xcode and restarting the Mac, but I still get the error. I have imported RealityKit.
I'm trying to run the HelloPhotogrammetry code provided by Apple.
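For context, PhotogrammetrySession exists only in the macOS 12+ SDK (there is no iOS version), so building against an older SDK or the wrong platform produces exactly this "cannot find type" error. A small configuration sketch making that dependency explicit; the fallback messages are my own:

```swift
#if canImport(RealityKit) && os(macOS)
import RealityKit

// PhotogrammetrySession requires the macOS 12+ SDK and a macOS deployment target.
if #available(macOS 12.0, *) {
    print("PhotogrammetrySession is available: \(PhotogrammetrySession.self)")
} else {
    print("macOS 12 or later is required at runtime")
}
#else
print("PhotogrammetrySession requires building for macOS with the macOS 12+ SDK")
#endif
```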
Hi all,
We have a working model with a transparent front (to simulate glass); it successfully reflects the surroundings while remaining transparent. However, there seem to be some glitches at various angles: some objects behind the glass disappear when viewing it from the sides, and also when viewed from a low angle. They reappear whenever the AR view gets in front of the object again.
Hello there!
After updating to iOS 15, Quick Look does not work properly with SafariServices, which we use in our native app.
For example, open this link from Twitter: https://developer.apple.com/augmented-reality/quick-look/
Just tap a model to launch Quick Look and you will see the AR part is not working.
Hi!
I'm really excited to try the new ObjectCapture API. I have an iPhone 12 Pro (with the LiDAR scanner) but an old MacBook. I'm planning to get a new MacBook to run the RealityKit photogrammetry software, as shown in this example: https://developer.apple.com/videos/play/wwdc2021/10076/.
Are there any restrictions on the Mac hardware or is it fine as long as they support macOS 12.0+ Beta and Xcode 13.0+?
Thanks!
I want to add AR lenses and filters (like on Snapchat and Instagram) to the camera feature on my app. Is there a way to do this using ARKit? Thanks!
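For face filters specifically, ARKit's ARFaceTrackingConfiguration (TrueDepth devices only) is the usual starting point: each ARFaceAnchor carries a face mesh and blend-shape coefficients that can drive filter content. A hedged sketch; the clamp helper is my own, since blend-shape values are nominally in 0...1:

```swift
// Blend-shape coefficients are nominally in 0...1; clamp defensively before
// using them to drive filter geometry or shader parameters.
func clampedCoefficient(_ value: Float) -> Float {
    min(max(value, 0), 1)
}

#if canImport(ARKit)
import ARKit

// Start face tracking if the device has a TrueDepth camera.
func startFaceTracking(in session: ARSession) {
    guard ARFaceTrackingConfiguration.isSupported else { return }
    session.run(ARFaceTrackingConfiguration())
}

// In an ARSessionDelegate, read blend shapes from updated face anchors:
func handleUpdatedAnchors(_ anchors: [ARAnchor]) {
    for case let face as ARFaceAnchor in anchors {
        let smile = face.blendShapes[.mouthSmileLeft]?.floatValue ?? 0
        _ = clampedCoefficient(smile)   // drive your filter with this value
    }
}
#endif
```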
I try to run the game in the Xcode simulator but it always crashes. Help, please...
On Android it works fine.
Warning: Error creating LLDB target at path '/Users/vylegalovi/Library/Developer/Xcode/DerivedData/Unity-iPhone-efppyotdzukdtyampzmbagbszvbv/Build/Products/Release-iphonesimulator/VRMetroEscape.app'- using an empty LLDB target which can cause slow memory reads from remote devices.
W0929 23:53:04.273332 1 commandlineflags.cc:1311] Ignoring RegisterValidateFunction() for flag pointer 0x12ffae050: no flag found at that address
CrashReporter: initialized
2021-09-29 23:53:04.485 VRMetroEscape[25809:2206250] Built from '2019.3/staging' branch, Version '2019.3.0f6 (27ab2135bccf)', Build type 'Development', Scripting Backend 'il2cpp'
-> applicationDidFinishLaunching()
PlayerConnection initialized from /Users/vylegalovi/Library/Developer/CoreSimulator/Devices/D4E84443-11FF-4B7D-8EBD-01895B9BA6B1/data/Containers/Bundle/Application/F8344320-D856-41B5-8AC5-639E39AC54AB/VRMetroEscape.app/Data (debug = 0)
PlayerConnection initialized network socket : 0.0.0.0 55000
Multi-casting "[IP] 192.168.201.3 [Port] 55000 [Flags] 2 [Guid] 2011080506 [EditorId] 0 [Version] 1048832 [Id] iPhonePlayer(David-MacBook-Air.local):56000 [Debug] 0 [PackageName] iPhonePlayer [ProjectName] <no name>" to [225.0.0.222:54997]...
Started listening to [0.0.0.0:55000]
PlayerConnection already initialized - listening to [0.0.0.0:55000]
2021-09-29 23:53:04.816017+0200 VRMetroEscape[25809:2206250] Cannot find executable for CFBundle 0x7fc14c4bbee0 </System/Library/Frameworks/Metal.framework> (not loaded)
2021-09-29 23:53:04.971547+0200 VRMetroEscape[25809:2206250] Cannot find executable for CFBundle 0x7fc14df51ab0 </System/Library/Frameworks/QuartzCore.framework> (not loaded)
-> applicationDidBecomeActive()
[Subsystems] Discovering subsystems at path /Users/vylegalovi/Library/Developer/CoreSimulator/Devices/D4E84443-11FF-4B7D-8EBD-01895B9BA6B1/data/Containers/Bundle/Application/F8344320-D856-41B5-8AC5-639E39AC54AB/VRMetroEscape.app/Data/UnitySubsystems
GfxDevice: creating device client; threaded=1
gfx device intialization failed
Is there any way to detect data races using ARKit?
It only runs on the simulator.
Thank you!
Hello,
in our app we are downloading some user generated content (.reality files and USDZs) and displaying it within the app.
This worked without issues on iOS 14, but with iOS 15 (release version) there have been a lot of issues with certain .reality files. As far as I can see, USDZ files still work.
I've created a little test project and the error message log is not really helpful.
2021-10-01 19:42:30.207645+0100 RealityKitAssetTest-iOS15[3239:827718] [Assets] Failed to load asset of type 'RealityFileAsset', error:Could not find archive entry named assets/Scéna17_9dfa3d0.compiledscene.
2021-10-01 19:42:30.208097+0100 RealityKitAssetTest-iOS15[3239:827598] [Assets] Failed to load asset path '#18094855536753608259'
2021-10-01 19:42:30.208117+0100 RealityKitAssetTest-iOS15[3239:827598] [Assets] AssetLoadRequest failed because asset failed to load '#18094855536753608259'
2021-10-01 19:42:30.307040+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.307608+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.307712+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.307753+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.307790+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.307907+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.307955+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.308155+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.308194+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
▿ Failed to load loadRequest.
- generic: "Failed to load loadRequest."
Basic code structure that is used for loading:
cancellable = Entity.loadAsync(named: entityName, in: .main)
    .sink { completion in
        switch completion {
        case .failure(let error):
            dump(error)
            print("Done")
        case .finished:
            print("Finished loading")
        }
    } receiveValue: { entity in
        print("Entity: \(entity)")
    }
Is there any way to force it to load in a mode that enforces compatibility?
As mentioned this only happens on iOS 15. Even ARQuickLook can't display the files anymore (no issues on iOS 14).
Thanks for any help!
Hello,
I downloaded the Pixar kitchen scene and opened the rug asset as a USD file. It's about 26k polygons (unsubdivided mesh) and only 352 KB in size.
Converting the rug to a SceneKit or OBJ file increases the file size to 3.7 MB, about 10x larger!
How did Pixar manage to export a 26k-polygon model at only 352 KB? Is this only possible using their proprietary Presto software?
Are there special settings we need to use to export models created in Maya or 3ds Max with the same file-size optimization? The rug model is 3.7 MB when exported as a USD file from 3ds Max.
We'd like to add some simple code to our .reality file, but I'm thinking that in order to do that, it needs to become an app.
Is this true, or can one add code to a Reality Composer scene and then export a .reality file?
Thanks!
How can you run a full-screen compute pass on the final rendered frame of RealityKit to add, for example, a glowing effect to some objects, or, say, apply a sepia filter?
This is possible in SceneKit with SCNTechnique, but I can't find the equivalent in RealityKit.
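The closest RealityKit equivalent I'm aware of is ARView.renderCallbacks.postProcess (iOS 15+), which hands you the rendered frame as a Metal texture to filter before display. A hedged sketch using MPS for the blur; the context property names are from memory and should be checked against the ARView.PostProcessContext documentation. The pure sepia helper (my own) shows the per-pixel math a sepia shader would apply:

```swift
// Classic sepia weights applied to an RGB triple (components in 0...1, clamped).
func sepia(r: Double, g: Double, b: Double) -> (r: Double, g: Double, b: Double) {
    (min(0.393 * r + 0.769 * g + 0.189 * b, 1),
     min(0.349 * r + 0.686 * g + 0.168 * b, 1),
     min(0.272 * r + 0.534 * g + 0.131 * b, 1))
}

#if canImport(RealityKit) && canImport(MetalPerformanceShaders) && os(iOS)
import RealityKit
import MetalPerformanceShaders

@available(iOS 15.0, *)
func installPostProcess(on arView: ARView) {
    arView.renderCallbacks.postProcess = { context in
        // Blur the rendered frame into the output texture as a simple example;
        // a custom compute kernel could apply sepia or glow here instead.
        let blur = MPSImageGaussianBlur(device: context.device, sigma: 6)
        blur.encode(commandBuffer: context.commandBuffer,
                    sourceTexture: context.sourceColorTexture,
                    destinationTexture: context.targetColorTexture)
    }
}
#endif
```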
So, I've modified the CaptureSample iOS app to take photos using the TrueDepth front camera. It worked perfectly, and I have TIF depth maps together with the gravity vector and the photos I took.
Using the HelloPhotogrammetry command line, I created the meshes without any problems.
I notice the meshes have a consistent size between them; for example, after creating a mesh of my face and a mesh of my nose, the nose mesh fits perfectly on top of the nose in the face mesh. Great!
BUT, when I open the meshes in Maya, for example, they are really, really tiny!
I was expecting to see the objects at the proper scale, and hopefully be able to take measurements in Maya to see if they match the real measurements of the scanned object, but they don't seem to come in at the right size at all. I tried setting Maya to meters, centimetres, and millimetres, but it always imports the meshes really tiny. I have to apply a scale of 100 to be able to see the meshes, but then they don't measure correctly. By trial and error, I found that scaling the meshes by 86 makes them match the real-world scale in centimetres.
Is there a proper space conversion that needs to be applied to the mesh to convert it to the real world scale?
Could the problem be that I'm using the TrueDepth camera instead of the back camera, and the depth map values come in a different scale than what HelloPhotogrammetry expects?
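For what it's worth, RealityKit and Object Capture work in meters, while DCC apps often default to centimeters, which alone accounts for a factor of 100; a residual factor (86 instead of 100) may indeed mean the depth values are in a different range than the pipeline expects. A trivial conversion helper (my own) for rescaling vertices before import:

```swift
// Rescale mesh vertex positions by a uniform factor,
// e.g. 100 for meters -> centimeters.
func rescaled(vertices: [[Double]], by factor: Double) -> [[Double]] {
    vertices.map { $0.map { $0 * factor } }
}
```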