Post not yet marked as solved
Hi,
Is the LiDAR scanner on the new iPad Pro and iPhone 12 series a good device for making a 3D scan of an object? How high would the resolution be? And what is the ideal object size?
And also: can the camera system and LiDAR sensor work together to achieve a 3D model with texture?
Any help is much appreciated.
Kind regards, Sybren
It seems that on iOS 13 & 14 environment probes are notably too dark, and never reach an acceptable brightness that matches the surrounding environment.
Is there any way to get "reasonable" IBL lighting in QuickLook that is not like 50% gray all the time, without resorting to hacks such as using emissive colors/textures? Clearly there must be something wrong with the IBL estimation, as the same scene in Google SceneViewer is very bright and nice under the same circumstances.
The issue reproduces for example with the QuickLook gallery; the ceramics piece there is nearly 100% white as per the USD file but renders dull and gray next to a physical ceramics piece.
Video of the issue:
drive.google.com/file/d/14mVQFTNe6pO_4tYNIvpZa9eAS2YzoVPO/view?usp=sharing
More pictures:
drive.google.com/drive/folders/1ej6g-gpBAu53z2Zn08eQFNZAkTmDA_XJ?usp=sharing
Note that in those pictures, all spheres in that grid are purely white, with varying degrees of metallic and roughness being the only difference. My expectation would be that the diffuse ones would appear "white" and not dark grey; seems impossible to get a "realistic" picture.
Happens here as well:
https://developer.apple.com/augmented-reality/quick-look/models/cupandsaucer/cup_saucer_set.usdz
I'm currently trying to project a transparent .png file onto a flat surface (table/paper), but the shadow is rendering as a gray box.
I am a huge fan of the shadows in Reality Composer; however, I'm searching for an option to adjust the shadow's opacity or turn it off.
Hello,
We are having an issue with an app we developed for a client that combines AR and VR experiences in one iPhone app.
The app works flawlessly on iPhones up to iOS 13, but VR is broken on iOS 14. After launching the VR experience on an iPhone running iOS 14, the view keeps spinning rapidly.
We have noticed many people facing the same issue with their VR apps after updating to iOS 14.
We have used Google VR SDK for VR development and EasyAR SDK for AR development and developed the app using Unity 2019.3 game engine.
Going through Google VR's website, we noticed that they now point to a new open-source SDK/plugin, Cardboard XR, which works with iPhones running iOS 14. We tried the open-source plugin with its own sample scene and observed that, although VR works on iOS 14, the plugin has other bugs/glitches that would prevent us from releasing the app.
This incompatibility between iOS 14 and the Google VR SDK surfaced at the last moment, while we were producing final distribution builds. We have waited since the end of September to see if the issue gets resolved in a subsequent iOS 14 update, but even the current iOS 14.2.1 has the same issue.
So, could you please let us know when, or in which update, this issue will be resolved for Google VR SDK apps developed with Unity?
If you have any alternate solution to this, please let us know.
I appreciate your quick response as our client is awaiting app release.
Thanks!
I'm very excited about the new AirTag product and am wondering if there will be any new APIs introduced in iOS 14.5+ to allow developers to build apps around them outside the context of the Find My network?
The contexts in which I am most excited about using AirTags are:
Gaming
Health / Fitness-focused apps
Accessibility features
Musical and other creative interactions within apps
I haven't been able to find any mention of APIs. Thanks in advance for any information that is shared here.
Alexander
Hello,
I am new to this amazing AR development world.
I wanted to ask: if I want to develop an app for the AR glasses that are expected to launch in the future, should I use ARKit?
From my understanding, you capture images on an iOS device and send them to macOS, which uses photogrammetry via the Object Capture API to process them into a 3D model…
Is it possible to leave macOS out and call the API within the app itself, so everything from scanning to processing happens on device? I see there are already scanner apps on the App Store, so I know it is possible to create 3D models on the iPhone within an app; but can this API do that? If not, any resources to point me in the right direction?
(I’m working on creating a 3D food app, that scans food items and turns them into 3D models for restaurant owners… I’d like the restaurant owner to be able to scan their food item all within the app itself)
When and where can I get the Object Capture App?
thanks Ralf
I do not know how to run it.
When I use a DirectionalLight with a Shadow component, I often see vibrating graphic artifacts, from wiggling at the edges to distracting herringbone patterns across flat surfaces.
To a lesser degree I also see similar artifacts along the edges of straight objects alongside ground shadows when .receivesLighting is set in sceneUnderstanding.
I discussed these with Michael in our RealityKit lab earlier. I understand this might be "shadow acne" and the shadow's depthBias parameter may affect it.
Can we have an overview of these issues and an approach to adjusting parameters to eliminate it?
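For reference, depthBias is exposed on DirectionalLightComponent.Shadow in RealityKit. A minimal, hedged sketch of raising it to reduce shadow acne (the values here are illustrative starting points, not tuned recommendations; reportedly the default depthBias is 1, and larger values trade acne for "peter-panning", where the shadow detaches slightly from the object):

```swift
import RealityKit

// Illustrative configuration sketch: a directional light whose shadow
// uses a larger depth bias to suppress acne/herringbone artifacts.
let lightEntity = Entity()
lightEntity.components.set(DirectionalLightComponent(
    color: .white,
    intensity: 5000
))
lightEntity.components.set(DirectionalLightComponent.Shadow(
    maximumDistance: 4,   // tighter shadow bounds improve shadow-map resolution
    depthBias: 4.0        // raise gradually until the acne disappears
))
```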
Is there some code to create the example GUI app you used in the demo?
I've got a quick question regarding ARKit's scene reconstruction. Is it possible to get the world coordinates of the faces/vertices that are part of the generated mesh, or to select them individually?
After looking through Apple's documentation and tinkering with the example apps, it does not seem possible with the faces property of ARMeshGeometry, but the vertices property does return coordinates. Here's Apple's code snippet on how to select specific vertices:
extension ARMeshGeometry {
    func vertex(at index: UInt32) -> SIMD3<Float> {
        assert(vertices.format == MTLVertexFormat.float3, "Expected three floats (twelve bytes) per vertex.")
        let vertexPointer = vertices.buffer.contents().advanced(by: vertices.offset + (vertices.stride * Int(index)))
        let vertex = vertexPointer.assumingMemoryBound(to: SIMD3<Float>.self).pointee
        return vertex
    }
}
I've tried to place objects at those coordinates to see what they refer to, but they somehow end up in the middle of the room, far away from the mesh, leaving me a bit confused as to what the vertex coordinates actually refer to.
I'd appreciate any answers on how to approach this!
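For what it's worth, a likely explanation is that ARMeshGeometry vertices are expressed in the mesh anchor's local coordinate space, so they need to be multiplied by the anchor's transform before being used as world positions (with ARKit this would be anchor.transform * SIMD4<Float>(vertex, 1)). A minimal sketch of that transform in plain Swift; the column-major 4×4 layout mirrors simd_float4x4, and the names and values are illustrative:

```swift
// A 4x4 transform stored as columns, matching simd_float4x4's layout.
struct Transform4x4 {
    var columns: [[Float]]  // four columns of four floats each

    static let identity = Transform4x4(columns: [
        [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],
    ])

    // Apply the transform to a point (implicit w = 1), i.e. rotate/scale
    // by the upper-left 3x3 and then add the translation column.
    func transformPoint(_ p: [Float]) -> [Float] {
        var out: [Float] = [0, 0, 0]
        for row in 0..<3 {
            out[row] = columns[0][row] * p[0]
                     + columns[1][row] * p[1]
                     + columns[2][row] * p[2]
                     + columns[3][row]  // translation column, w = 1
        }
        return out
    }
}

var anchorTransform = Transform4x4.identity
anchorTransform.columns[3] = [2, 0, -1, 1]   // anchor sits at (2, 0, -1)
let localVertex: [Float] = [0.5, 0.0, 0.25]  // anchor-local mesh vertex
let worldVertex = anchorTransform.transformPoint(localVertex)
// worldVertex is the position to place a marker entity at
```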
Hi
I'm trying to become familiar with RealityKit 2. When I build the code from the session, I get compile errors.
Any advice?
Link to the sample code below
https://developer.apple.com/documentation/realitykit/building_an_immersive_experience_with_realitykit
Hi, in SceneKit I pass custom parameters to a Metal shader using an SCNAnimation, for example:
let revealAnimation = CABasicAnimation(keyPath: "revealage")
revealAnimation.duration = duration
revealAnimation.toValue = toValue
let scnRevealAnimation = SCNAnimation(caAnimation: revealAnimation)
material.addAnimation(scnRevealAnimation, forKey: "Reveal")
How would I do something similar for a Metal shader in RealityKit?
I saw in the Octopus example:
//int(params.uniforms().custom_parameter()[0])
But it's commented out, and there is no example of how to set the custom variable and animate it (unless I missed it).
Great session BTW
Thanks
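For context, one possible workaround (an assumption, not a documented CAAnimation bridge): drive the parameter yourself from a per-frame update, e.g. inside a RealityKit SceneEvents.Update subscription, and write the result into the material's custom.value each frame. The timing/easing math is plain Swift:

```swift
// Per-frame animation driver: maps elapsed time to an eased 0...1 value.
// In RealityKit you would advance it from a SceneEvents.Update subscription
// and write the result into customMaterial.custom.value's x component, e.g.:
//   material.custom.value = SIMD4<Float>(eased, 0, 0, 0)
//   entity.model?.materials = [material]
struct ParameterAnimation {
    let duration: Float
    private(set) var elapsed: Float = 0

    init(duration: Float) { self.duration = duration }

    var isFinished: Bool { elapsed >= duration }

    // Advance by the frame's deltaTime and return the eased parameter value.
    mutating func advance(by deltaTime: Float) -> Float {
        elapsed = min(elapsed + deltaTime, duration)
        let t = elapsed / duration
        return t * t * (3 - 2 * t)  // smoothstep easing
    }
}

var reveal = ParameterAnimation(duration: 2.0)
let midway = reveal.advance(by: 1.0)  // halfway through the animation
let done = reveal.advance(by: 1.0)    // animation complete
```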
Has anyone run into limitations with an 8gb RAM M1 Mac Mini? I'm expecting there are some compromises with only 8gb, but curious about real-world results.
It's impressive to see that the M1 Mac Mini is capable of running PhotogrammetrySession at all, unlike my 2020 Intel MBP. I'm planning to buy one just for this purpose.
The requirements slide from the presentation says any M1 Mac will work, whereas Intel machines need 16 GB of RAM and a 4 GB AMD video card.
I'm inclined to get a Mac Mini with 16gb, but that config isn't available near me for pickup and delivery is more than a week out. If I knew that 8gb was enough to process 150 or so photos at high quality that's probably all I would need and could save $200 and get it immediately.
Side note: I've been doing photogrammetry on PCs for years and would run out of memory occasionally using Agisoft on a 64gb system, which I needed to upgrade to 128gb. Those were large datasets (500+ photos) covering several hundred square meters from a drone at high resolution. My object scanning needs won't be as demanding, however 8gb just doesn't seem like much to work with. But, I suppose that even Nvidia 3070 Ti's only have 8gb of video memory and the M1's unified memory architecture might make that a better comparison than thinking about traditional system memory...
I'm not sure what happened, I'm pretty sure I wasn't having issues importing these file types a few weeks ago. Just to be sure I checked the webpage for Reality Converter and yup, GLTF is listed...
Simply drag-and-drop common 3D file formats, such as .obj, .gltf and .usd
I haven't updated anything on my computer, so I dunno how anything could have changed. Anyone know what the deal could be?
Hello,
In this project https://developer.apple.com/documentation/arkit/content_anchors/tracking_and_visualizing_faces there is some sample code that shows how to map the camera feed onto an object with SceneKit and a shader modifier.
I would like to know if there is an easy way to achieve the same thing with a CustomMaterial and RealityKit 2.
Specifically I'm interested in what would be the best way to pass in the background of the RealityKit environment as a texture to the custom shader.
In SceneKit this was really easy as one could just do the following:
material.diffuse.contents = sceneView.scene.background.contents
As the texture input for custom material requires a TextureResource I would probably need a way to create a CGImage from the background or camera feed on the fly.
What I've tried so far is accessing the captured image from the camera feed and creating a CGImage from the pixel buffer like so:
guard
    let frame = arView.session.currentFrame,
    let cameraFeedTexture = CGImage.create(pixelBuffer: frame.capturedImage),
    let textureResource = try? TextureResource.generate(from: cameraFeedTexture, withName: "cameraFeedTexture", options: .init(semantic: .color))
else {
    return
}

// assign texture
customMaterial.custom.texture = .init(textureResource)

extension CGImage {
    public static func create(pixelBuffer: CVPixelBuffer) -> CGImage? {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
        return cgImage
    }
}
This seems wasteful though and is also quite slow.
Is there any other way to accomplish this efficiently or would I need to go the post processing route?
In the sample code, the displayTransform for the view is also passed as an SCNMatrix4. CustomMaterial's custom.value only accepts a SIMD4, though. Is there another way to pass in the matrix?
Another idea I had was to create a CustomMaterial from an OcclusionMaterial, which already seems to contain information about the camera feed, but so far I've had no luck with it.
Thanks for the support!
The CaptureSample app creates a depth map image (.TIF) and a gravity file (.TXT).
What's the role of those files?
Are they used only to calculate the scale of the object, or do they also contribute to other parts of the algorithm?
Hi,
I'm getting this error code "-21" ... anyone know what it means?
cantCreateSession("Native session create failed: CPGReturn(rawValue: -21)")
2021-07-17 14:01:29.621817+0200 HelloPhotogrammetry[2578:40148] [HelloPhotogrammetry] Using configuration: Configuration(isObjectMaskingEnabled: true, sampleOrdering: RealityFoundation.PhotogrammetrySession.Configuration.SampleOrdering.unordered, featureSensitivity: RealityFoundation.PhotogrammetrySession.Configuration.FeatureSensitivity.normal)
2021-07-17 14:01:29.669711+0200 HelloPhotogrammetry[2578:40148] Metal API Validation Enabled
2021-07-17 14:01:29.709715+0200 HelloPhotogrammetry[2578:40148] [HelloPhotogrammetry] Error creating session: cantCreateSession("Native session create failed: CPGReturn(rawValue: -21)")
Program ended with exit code: 1
Hi, I'm using AR Foundation in Unity for an AR app. There are some images used as markers, and when the app detects one it starts playing an animation.
On Android it works perfectly, but on iOS the animation doesn't loop. It's as if the iPhone just stops tracking the marker: the app plays the animation, but at some point the animation simply stops, and I have to move the phone and look at the marker again.
How can I make the animation loop work?
Thanks.