Post not yet marked as solved
I want to add a small banner to my AR model when viewed in Quick Look in my app. I haven't been able to find much information on this; however, I found a resource that explains how to add one when the model is hosted on a website. My model is stored locally on the device, so how would I add the banner?
I have been working on developing an app for the 2020 iPad Pro that captures scene depth from the LiDAR sensor using the ARKit 4 sceneDepth property. It was working completely fine until iOS was updated today; now sceneDepth is always nil.
Post not yet marked as solved
Hi there,
I have a function that saves and restores an ARWorldMap in my app. My flow is:
1. Open my app and scan around.
2. Save the ARWorldMap to the iPad's local storage.
3. Load the ARWorldMap to restore it.
After repeating steps 2 and 3 several times, my app crashed. I thought it was an issue in my own code but couldn't find any clue, so I downloaded Apple's official ARWorldMap persistence demo from https://developer.apple.com/documentation/arkit/data_management/saving_and_loading_world_data, repeated the steps above, and it also crashed.
My iPad Pro is running iPadOS 15.3.1, and the error log says dyld4 config: DYLD_LIBRARY_PATH=/usr/lib/system/introspection.dylib … (please see the attachment for the full log). Any help is appreciated.
Post not yet marked as solved
I am thinking of building a web application that scans an object using the LiDAR sensor on an iPhone. It would be built with a JavaScript-based framework, preferably ReactJS.
We want this website to run in the Safari browser only, for performance reasons. Is there any way to do that?
Post not yet marked as solved
private let boardAnchor: AnchorEntity = AnchorEntity(
    // error: Argument passed to call that takes no arguments
    plane: .horizontal,                // error: Cannot infer contextual base in reference to member 'horizontal'
    classification: [.floor, .table],  // error (×2): Reference to member 'floor' cannot be resolved without a contextual type
    minimumBounds: SIMD2(0.1, 0.1)
)
How can this be fixed in the demo code of the chess-game example (capture chess swift) presented in the WWDC22 session on AR experiences?
Any help is highly appreciated.
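A hedged guess at a fix, assuming the errors come from a RealityKit version in which the plane target is passed as a single AnchoringComponent.Target value rather than through separate plane:/classification: arguments (boardAnchor is the name from the snippet above; this only compiles in an iOS/RealityKit target):

```
private let boardAnchor = AnchorEntity(
    // Wrap the plane parameters in one AnchoringComponent.Target value,
    // and give the bounds an explicit SIMD2<Float> type so the members
    // .horizontal, .floor, and .table can be inferred.
    .plane(.horizontal,
           classification: [.floor, .table],
           minimumBounds: SIMD2<Float>(0.1, 0.1))
)
```

If the errors persist, it may be worth checking which Xcode/RealityKit version the sample targets, since the shape of this initializer has changed across releases.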
Post not yet marked as solved
I'm having some issues with Reality Composer (the latest v1.5 with the latest Xcode beta) and I just wanted to check if these are known issues.
I have an image of a printed map which I'd like to turn into an AR-based interactive map. Something simple, where I tap a pin on the map, and details about that location move up from beneath the map, then move back out of view. The map, the pin and the location information are all separate flat images, and I'm using a horizontal anchor. Unfortunately, it seems that simple bugs are stopping this from working at all.
Say I set the pin to respond to a tap, use a "Move, Rotate, Scale to" action to move a second object (the location info) up, add a Wait action, then use another "Move, Rotate, Scale to" to move the info object back down to its original position. The result: the info object doesn't initially move up as far as it should, and the second "Move" pushes the info object away and out of sight completely.
If I try another approach, using the Show and Hide actions (using "Move from below" and "Move to below" as the Motion type), and again with a Wait in the middle, it works the first time, but subsequent taps cause the info object to simply appear, with no incoming animation, and then the outgoing animation works correctly.
Is it just something wrong with my system, or is this broken? If I don't try to move objects around (i.e. Show/Hide with "No Motion") then I have more luck, but I'm feeling pretty constrained.
Thanks in advance for all help with this.
Post not yet marked as solved
What is the correct approach to save the image (eg. CVPixelBuffer to png) obtained after calling the captureHighResolutionFrame method?
"ARKit captures pixel buffers in a full-range planar YCbCr (also known as YUV) format according to the ITU R. 601-4 standard"
Should I change the color space of the image (ycbcr to rgb using Metal)?
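Whichever route is taken (Metal, vImage, or Core Image), the underlying per-pixel math for full-range BT.601 is fixed. A minimal sketch of that math in plain Swift, assuming 8-bit components in [0, 255] (the function name is mine, not an API):

```swift
// Full-range BT.601 YCbCr -> RGB, per pixel. Cb and Cr are centered on 128.
func ycbcrToRGB(y: Double, cb: Double, cr: Double) -> (r: Double, g: Double, b: Double) {
    let r = y + 1.402 * (cr - 128)
    let g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    let b = y + 1.772 * (cb - 128)
    // Clamp so out-of-gamut values stay displayable.
    func clamp(_ v: Double) -> Double { min(255, max(0, v)) }
    return (clamp(r), clamp(g), clamp(b))
}
```

In a Metal shader the same coefficients usually live in a constant 3×3 matrix applied to (y, cb − 128, cr − 128). On the color-space question itself: yes, converting to RGB first is the usual step before encoding to PNG, since PNG has no YCbCr representation.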
Post not yet marked as solved
Is the "new lighting" introduced in WWDC22 "Explore USD tools and rendering" implemented only in AR Quick Look or has it become an integral part of the SceneKit and RealityKit rendering engines?
Post not yet marked as solved
If you use SceneKit with ARKit, the AR scene uses the SceneKit renderer.
Should you use SCNScene.write() to create a USDZ file and then open the USDZ file with AR Quick Look, AR Quick Look renders the AR scene with the RealityKit renderer.
The ARKit-in-app -> USDZ -> AR Quick Look renderers are not the same and could produce different appearances.
Have you seen similar problems with SceneKit -> AR Quick Look rendering?
I am using such a pipeline with PBR lighting and have observed that the resulting differences in material properties are large. (The geometries are fine.) I have had to compensate by recreating the SCNScene materials with modified properties. The agreement between the app scene and the AR Quick Look scene is greatly improved but unfortunately still not acceptable for critical evaluation of commercial products in interior design.
Post not yet marked as solved
I updated my iPhone 12 Pro to the iOS 16 beta, and the motion-capture feature in ARKit seems to have stopped functioning. I tried my own custom app (MoCáp) and the BodyDetection sample code from the Apple developer site, and neither works. Does anyone have the same issue?
Post not yet marked as solved
Hello, I am using YOLOv3 with Vision to classify objects during my AR session. I want to render the bounding boxes of the detected objects in my screen view. Unfortunately, the bounding boxes are placed too far down and have the wrong aspect ratio. Does anyone know what the issue might be?
This is how I am currently transforming the bounding boxes.
Assumptions:
The app is in portrait mode
Vision request is performed with centerCrop and orientation .right.
Fix the coordinate origin of Vision:
let newY = 1 - boundingBox.origin.y
let newBox = CGRect(x: boundingBox.origin.x, y: newY, width: boundingBox.width, height: boundingBox.height)
Undo center cropping of Vision:
let imageResolution: CGSize = currentFrame.camera.imageResolution
// Switching height and width because the original image is rotated
let imageWidth = imageResolution.height
let imageHeight = imageResolution.width
// Square inside of normalized coordinates.
let roi = CGRect(x: 0, y: 1 - (imageWidth/imageHeight + ((imageHeight-imageWidth) / (imageHeight*2))), width: 1, height: imageWidth / imageHeight)
let newBox = VNImageRectForNormalizedRectUsingRegionOfInterest(boundingBox, Int(imageWidth), Int(imageHeight), roi)
Bring coordinates back to normalized form:
let imageWidth = imageResolution.height
let imageHeight = imageResolution.width
let transformNormalize = CGAffineTransform(scaleX: 1.0 / imageWidth, y: 1.0 / imageHeight)
let newBox = boundingBox.applying(transformNormalize)
Transform to scene view: (I assume the error is here. I found out while debugging that the aspect ratio of the bounding box changes here.)
let viewPort = sceneView.frame.size
let transformFormat = currentFrame.displayTransform(for: .landscapeRight, viewportSize: viewPort)
let newBox = boundingBox.applying(transformFormat)
Scale up to viewport size:
let viewPort = sceneView.frame.size
let transformScale = CGAffineTransform(scaleX: viewPort.width, y: viewPort.height)
let newBox = boundingBox.applying(transformScale)
Thanks in advance for any help!
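For what it's worth, one detail in the first step above is a frequent source of exactly this "too far down" symptom: Vision's normalized rects use a bottom-left origin, so the flip to a top-left origin has to subtract the rect's height as well, not only origin.y. A standalone sketch (helper names are mine), with the final viewport scaling written out without CGAffineTransform:

```swift
import Foundation  // CGRect / CGSize

// Flip a Vision-normalized rect (bottom-left origin) to a top-left origin.
// Subtracting the height keeps the box anchored at its visual top edge.
func flipVisionRect(_ box: CGRect) -> CGRect {
    CGRect(x: box.origin.x,
           y: 1 - box.origin.y - box.height,   // not just 1 - origin.y
           width: box.width,
           height: box.height)
}

// Scale a normalized rect up to view-port points (equivalent to the
// CGAffineTransform(scaleX:y:) step above).
func scaleToViewport(_ box: CGRect, viewPort: CGSize) -> CGRect {
    CGRect(x: box.origin.x * viewPort.width,
           y: box.origin.y * viewPort.height,
           width: box.width * viewPort.width,
           height: box.height * viewPort.height)
}
```

If only origin.y is flipped, every box ends up too low by exactly its own height, which would match the described placement error; the aspect-ratio change in the displayTransform step is then worth checking separately against the .right orientation used for the Vision request.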
Post not yet marked as solved
I am trying to use a custom model for ARBodyAnchor but when I run the program provided by Apple here:
https://developer.apple.com/documentation/arkit/content_anchors/capturing_body_motion_in_3d
I get the following error:
"BodyDetection No skeleton found for entity"
I can't tell what the difference is between the way Apple rigged their skeleton and the way I rigged mine.
Post not yet marked as solved
I'm trying to convert an OBJ model to USDZ using only a color map:

xcrun usdz_converter Kudde_v03/Kudde_v03.obj ./Kudde_flower_2048.usdz -color_map Final_test_1/Textures/2048/Kudde_2048_flower_lagoon_color_map.png -normal_map Final_test_1/Textures/2048/Kudde_2048_normal_map.png -v

The model converts fine and looks OK in Quick Look on my Mac, but when I look at it in Quick Look on my iPhone the model is too dark. If I open the OBJ file in Xcode and SceneKit, the model also looks fine after applying the color map to the diffuse slot. It's like the lighting is all wrong in Quick Look on the iPhone; the issue is there in both Object mode and AR mode.

This is what it looks like in Quick Look on an iPhone X: https://ibb.co/MG69BVb (the preview in the Files app looks fine). In Quick Look on my Mac: https://ibb.co/gM626Zf. In Xcode: https://ibb.co/zPgfr7f

Here's my verbose output:

usdz_converter
Version: 1.009
-v: Verbose output
Primitives:
Transform: /Kudde_v03
Transform: /Kudde_v03/Geom
GeomMesh: /Kudde_v03/Geom/ZBrush_defualt_group
bound material: /Kudde_v03/Materials/default
Replacing material
unbind material: /Kudde_v03/Materials/default
Binding to material /Kudde_v03/Materials/StingrayPBS_0
GeomScope: /Kudde_v03/Materials
ShadeMaterial: /Kudde_v03/Materials/default
ShadeMaterial: /Kudde_v03/Materials/StingrayPBS_0
ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/pbr
ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/Primvar
ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/color_map
ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/normal_map
ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/ao_map
ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/emissive_map
ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/metallic_map
ShadeShader: /Kudde_v03/Materials/StingrayPBS_0/roughness_map

Any ideas on what's going on here? Or how I can get my models to not be too dark on the phone?
Post not yet marked as solved
Short description
See ( https://youtu.be/fD6af6MaFRo ) to directly see the issue.
Description
The way a device (iPhone or iPad) is handled affects the ARKit result (camera tracking or scene reconstruction, since both are linked). A device handled in portrait position won’t behave the same way as a device handled in landscape position. One position gives a correct result, while the other doesn't: the camera tracking or the reconstructed scene has a visible error.
Which handled position gives a correct result?
It depends on the movement that the device performs :
It behaves correctly when you perform a pan with the device handled in portrait, or a tilt with the device handled in landscape.
The issue can be visible when you perform a pan with the device handled in landscape, or a tilt with the device handled in portrait.
In practice the device is not limited to purely horizontal or vertical movements, so both the portrait and landscape positions return an incorrect result, in proportion to how much of the "wrong" movement was performed for that orientation.
Why some people might not have noticed it
This issue might go under radars for many AR applications, since they tend to:
Focus on one area in space
Turn around in a room in portrait position
Do a lot of movement but without really needing to have a precise ARKit result
What could be the root cause of this
Since we have no access to ARKit, we can only try to guess.
What we deduced is that the device should not move along its vertical axis, i.e. in the direction of its front camera or its Lightning/USB-C connector. What could differ between the vertical and horizontal axes of the device? The only element we noticed is the camera sensor orientation, since the sensor is a rectangle.
How to reproduce the issue
Since the issue is inside ARKit, every application on the App Store that we tested has this issue. So you have two choices:
Create a test project
Create a minimal ARKit project.
Use an ARView with the session configured with ARWorldTrackingConfiguration (the issue might be present in other modes; we just haven't tested them).
Enable scene reconstruction.
Enable the ARView debug option .showSceneUnderstanding.
Aggregate environment data to generate a mesh by moving the device around.
Disable SceneReconstruction.
Handle the device in landscape position.
Rotate 360° horizontally in the previously scanned environment.
Expected result:
When moving around in an environment, the displayed mesh provided by showSceneUnderstanding option is correctly aligned with the environment.
Current result:
When moving around in an environment, the displayed mesh is positioned incorrectly.
Download an AR app on the App Store:
Example of AR application:
- SiteScape
An example for you to see the result.
(SiteScape v1.3 on iPhone 12 Pro iOS 15.5).
- MetaScan
- CamTrackAr
A little bit harder to see the issue since only an anchor can be used as a reference to see an offset.
Important note
This issue is critical for the application we are developing; we can't release it with this bug. We understand that fixing it can take time, so we are doing our best to be patient. But this issue has been reported since 16 Jun 2021 in Feedback Assistant (FB9184883), so we have been waiting for almost a year now.
We have absolutely no visibility into how this issue is being handled, so we tried to get help from Developer Technical Support (DTS), sent a message to Apple Developer Program Support, and asked on Feedback Assistant. We learned nothing. What is the priority of this issue? Is anyone even assigned to it right now? Is one year not enough time to fix it?
So can we have at least some information? And hopefully have this bug finally fixed.
Post not yet marked as solved
How do I overlay an annotation/detail popup to AR Models in QuickLook? This was done with the WWDC Trading Cards AR Model. I haven't been able to find any other info on this. Here is an article which has an image of what I am referring to
https://vrscout.com/news/apple-shows-off-ar-trading-cards-ahead-of-wwdc-2022/
Post not yet marked as solved
How can I get started on my first project with Apple's AR tools, and what applications do I need to create a first design?
Post not yet marked as solved
Hello.
We are developing an application that displays AR content using markers, but we would like the markers to be recognizable from as far away as possible, because when the camera sees a marker from a distance, the AR content is not displayed until you get within a certain range.
We would like to use markers that ARKit can recognize easily. What kinds of markers can we expect to improve the recognition accuracy?
Even simple markers that should be easy to recognize from a distance don't seem to be detected at all, which is why we are asking.
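Two factors that, per Apple's reference-image guidance, tend to matter: the marker image should be high-contrast with many distinct, non-repetitive features, and the declared physical size must match the printed marker, since ARKit uses it to judge plausible detections. A hedged sketch of the size declaration (the asset name "marker" and the 0.30 m width are made-up values; this only compiles in an iOS target):

```
import ARKit
import UIKit

// Declare the marker's real printed width (in meters) so ARKit can reason
// about its distance; an inaccurate physical size hurts recognition.
let cgImage = UIImage(named: "marker")!.cgImage!
let marker = ARReferenceImage(cgImage, orientation: .up, physicalWidth: 0.30)
marker.name = "marker"

let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = [marker]
```

The same physical-size field exists on reference images defined in an asset catalog, so if the markers come from there it is worth verifying the value entered matches the print.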
Post not yet marked as solved
I'm noticing about a 450 MB memory footprint when loading a simple 2 MB USDZ model.
To eliminate any misuse of the frameworks on my part, I built a basic RealityKit app from Xcode's Augmented Reality App template, with no code changes at all.
I'm still seeing 450 MB in the Xcode gauges (so in debug mode).
Looking at the memgraph, the IOAccelerator and IOSurface regions have 194 MB and 131 MB of dirty memory respectively.
Is this all camera-related memory?
In the hopes of reducing compute & memory, I tried disabling various rendering options on ARView as follows:
arView.renderOptions = [
.disableHDR,
.disableDepthOfField,
.disableMotionBlur,
.disableFaceMesh,
.disablePersonOcclusion,
.disableCameraGrain,
.disableAREnvironmentLighting
]
This brought it down to 300 MB, which is still quite a lot.
When I configure ARView.cameraMode to be .nonAR, it's still 113 MB.
I'm running this on an iPhone 13 Pro Max, which could explain some of the large allocations, but I would still like to find opportunities to reduce the footprint.
When I use QLPreviewController, the same model (~2 MB) takes only 27 MB in the Xcode gauges.
Any ideas on reducing this memory footprint while using ARKit?
Post not yet marked as solved
I'm trying to build an augmented reality app for my iPhone. I made it in Unity, and it builds and plays on Android, but crashes immediately on any iPhone. I get this in my console in Xcode:
2022-06-05 19:23:13.222039-0500 PieceoftheSkyWIP[12433:706015] Error loading /var/containers/Bundle/Application/4E214C36-6812-4936-951A-153FB4DA4454/PieceoftheSkyWIP.app/Frameworks/UnityFramework.framework/UnityFramework: dlopen(/var/containers/Bundle/Application/4E214C36-6812-4936-951A-153FB4DA4454/PieceoftheSkyWIP.app/Frameworks/UnityFramework.framework/UnityFramework, 0x0109): Library not loaded: @rpath/AlgoSdk.framework/AlgoSdk
Referenced from: /private/var/containers/Bundle/Application/4E214C36-6812-4936-951A-153FB4DA4454/PieceoftheSkyWIP.app/Frameworks/UnityFramework.framework/UnityFramework
Reason: tried: ‘/private/var/containers/Bundle/Application/4E214C36-6812-4936-951A-153FB4DA4454/PieceoftheSkyWIP.app/Frameworks/AlgoSdk.framework/AlgoSdk’ (no such file), ‘/private/var/containers/Bundle/Application/4E214C36-6812-4936-951A-153FB4DA4454/PieceoftheSkyWIP.app/Frameworks/UnityFramework.framework/Frameworks/AlgoSdk.framework/AlgoSdk’ (no such file), ‘/private/var/containers/Bundle/Application/4E214C36-6812-4936-951A-153FB4DA4454/PieceoftheSkyWIP.app/Frameworks/AlgoSdk.framework/AlgoSdk’ (no such file), ‘/private/var/containers/Bundle/Application/4E214C36-6812-4936-951A-153FB4DA4454/PieceoftheSkyWIP.app/Frameworks/UnityFramework.framework/Frameworks/AlgoSdk.framework/AlgoSdk’ (no such file), ‘/private/var/containers/Bundle/Application/4E214C36-6812-4936-951A-153FB4DA4454/PieceoftheSkyWIP.app/Frameworks/AlgoSdk.framework/AlgoSdk’ (no such file), ‘/System/Library/Frameworks/AlgoSdk.framework/AlgoSdk’ (no such file)
My coworker also tried to build the project on his M1 Mac and got an actual error that says something like:
building for iOS, but the linked and embedded framework was built for iOS + iOS Simulator
I'm using Unity 2019.4.0
ARFoundation 4.1.1
ARCore 4.1.1
ARKit 4.1.1
Xcode 13.4.1
Post not yet marked as solved
Hello,
I'm using ARKit simultaneously with a custom SLAM system, and I'm trying to align ARKit's origin with that SLAM. I just saw that the function setWorldOrigin actually does this job.
My question is: is it a bad habit to call this function regularly as my custom SLAM refines its own position, or should I just call it once at the beginning?
Thanks
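For reference, the call being discussed is ARSession.setWorldOrigin(relativeTransform:); a hedged sketch of invoking it as the external SLAM refines its estimate (the function and parameter names are mine; iOS-only, so not compilable elsewhere):

```
import ARKit

// Shift ARKit's world origin by the latest correction from the custom SLAM.
// The transform is applied relative to the *current* origin, so repeated
// calls should pass incremental corrections, not the absolute pose.
func applySLAMCorrection(_ correction: simd_float4x4, to session: ARSession) {
    session.setWorldOrigin(relativeTransform: correction)
}
```

Whether regular small corrections are acceptable in practice is exactly the open question here; one design consideration is that already-placed content is expressed in world coordinates, so each origin shift moves it visually unless it is re-anchored.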