Discuss augmented reality and virtual reality app capabilities.

Posts under AR / VR tag

93 Posts
Post not yet marked as solved
1 Reply
29 Views
Where do I need to add one of these lines of code in order to enable people occlusion in my scene?

config.frameSemantics.insert(.personSegmentationWithDepth)
static var personSegmentationWithDepth: ARConfiguration.FrameSemantics { get }

Here is the code:

import UIKit
import RealityKit
import ARKit

class ViewControllerBarock: UIViewController, ARSessionDelegate {
    @IBOutlet var arView: ARView!
    let qubeAnchor = try! Barock.loadQube()
    var imageAnchorToEntity: [ARImageAnchor: AnchorEntity] = [:]

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.scene.addAnchor(qubeAnchor)
        arView.session.delegate = self
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        anchors.compactMap { $0 as? ARImageAnchor }.forEach {
            let anchorEntity = AnchorEntity()
            let modelEntity = qubeAnchor.stehgreifWurfel!
            anchorEntity.addChild(modelEntity)
            arView.scene.addAnchor(anchorEntity)
            anchorEntity.transform.matrix = $0.transform
            imageAnchorToEntity[$0] = anchorEntity
        }
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        anchors.compactMap { $0 as? ARImageAnchor }.forEach {
            let anchorEntity = imageAnchorToEntity[$0]
            anchorEntity?.transform.matrix = $0.transform
        }
    }
}
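For reference, a minimal sketch of one place such a line is commonly added, inside the ViewControllerBarock above: build a configuration in viewDidLoad and run it on the ARView's session before any anchors arrive. The use of ARWorldTrackingConfiguration here is an assumption for illustration (RealityKit otherwise runs its own automatic configuration), not a confirmed fix.

override func viewDidLoad() {
    super.viewDidLoad()
    arView.scene.addAnchor(qubeAnchor)
    arView.session.delegate = self

    // Assumed approach: run a world-tracking configuration with people
    // occlusion enabled, if the device supports it.
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        config.frameSemantics.insert(.personSegmentationWithDepth)
    }
    arView.session.run(config)
}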
Posted by tuskuno. Last updated.
Post not yet marked as solved
5 Replies
329 Views
We have color images as reference images and they work great, but the problem we are facing is as follows. What we have is a coloring book for children; a sample image from the book is shown below.

1. This is what a physical reference image will look like.

Now coming to our use case: as I mentioned, this is a coloring book for children, so I tried to create a worst-case image (using as many colors as possible and coloring as much of the white space as possible), which looks like this.

2. A worst-case physical image can look like this.

Now the issue is with point 2: ARKit is not able to detect the image at all. I assume this is because I have used enough colors that ARKit now treats it as a different image compared to the given reference image. Is there any way I can also detect the worst-case version of the image, maybe by using monochrome (black and white) images? Please suggest.
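For what it's worth, a hedged sketch of how a monochrome variant could be registered at runtime: ARReferenceImage can be built directly from a CGImage, so a grayscale copy of each page could be added to (or substituted for) the color references. The image name and the physical width in meters are placeholder values, and whether this actually improves detection of heavily colored pages would need testing.

import ARKit
import UIKit

// Sketch: build an ARReferenceImage at runtime, e.g. from a grayscale
// version of a page. Name and physicalWidth are illustrative values.
func makeReferenceImage(named name: String, physicalWidth: CGFloat) -> ARReferenceImage? {
    guard let cgImage = UIImage(named: name)?.cgImage else { return nil }
    let reference = ARReferenceImage(cgImage, orientation: .up, physicalWidth: physicalWidth)
    reference.name = name
    return reference
}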
Posted by spk8699. Last updated.
Post not yet marked as solved
1 Reply
336 Views
It seems like applying .scaleToFit to an ARView object does not resize the view to the correct size; .aspectRatio(contentMode: .fit) also does not work. Tested on an iPhone 12 Pro with a project derived almost directly from the base RealityKit AR project provided by Xcode, this results in a square view rather than one with a 4:3 aspect ratio, which would match the sensor size. When forcing the view to the expected aspect ratio with .aspectRatio(CGSize(width: 3, height: 4), contentMode: .fit), I can confirm that the square is cropped from the 4:3 image, and not the inverse. This is not a huge deal since it seems all iPhones have 4:3 sensors, but it is annoying. My app needs to let the user see the entire image captured when taking a snapshot, and having to ensure this doesn't break in the future is a bit tedious. Has anyone found a better workaround for this?
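For anyone hitting the same thing, this is the forced-aspect-ratio workaround described above as a minimal SwiftUI sketch; ARViewContainer is assumed to be the usual UIViewRepresentable wrapper from the Xcode RealityKit template.

import SwiftUI

struct ContentView: View {
    var body: some View {
        // Force the view to the camera's 4:3 aspect ratio instead of
        // letting it fill (and crop to) the available space.
        ARViewContainer()
            .aspectRatio(CGSize(width: 3, height: 4), contentMode: .fit)
    }
}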
Posted. Last updated.
Post not yet marked as solved
1 Reply
407 Views
I am attempting to build an AR app using Storyboard and SceneKit. When I ran an existing app I had already used, it launched but nothing happened. I thought this behavior was odd, so I decided to start from scratch with a new project. I started with the default AR project for Storyboard and SceneKit, and on run it immediately fails with an unwrapping-nil error on the scene, even though the scene file is obviously there. I am also given four build-time warnings:

Could not find bundle inside /Library/Developer/CommandLineTools

failed to convert file with failure reason: *** -[__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[0]

Conversion failed, will simply copy input to output.

Copy failed file:///Users/kruegerwilliams/Library/Developer/Xcode/DerivedData/ARtest-bjuwvdjoflchdaagofedfxpravsc/Build/Products/Debug-iphoneos/ARtest.app/art.scnassets/ship.scn -> file:///Users/kruegerwilliams/Library/Developer/Xcode/DerivedData/ARtest-bjuwvdjoflchdaagofedfxpravsc/Build/Products/Debug-iphoneos/ARtest.app/art.scnassets/ship.scn error:Error Domain=NSCocoaErrorDomain Code=516 "“ship.scn” couldn’t be copied to “art.scnassets” because an item with the same name already exists." UserInfo={NSSourceFilePathErrorKey=/Users/kruegerwilliams/Library/Developer/Xcode/DerivedData/ARtest-bjuwvdjoflchdaagofedfxpravsc/Build/Products/Debug-iphoneos/ARtest.app/art.scnassets/ship.scn, NSUserStringVariant=(

I am currently unsure how to fix these errors. It appears they must come from the command line tools, because after moving the device support files back to a stable version of Xcode the same issue is present. Is anyone else having these issues?
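As an aside, the unwrapping crash itself comes from the template force-unwrapping the scene. A small defensive sketch (assuming the template's sceneView outlet and asset path) that logs instead of crashing, which at least separates the asset-copy problem from the code:

override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self
    // Avoid the template's force-unwrap so a missing or un-copied ship.scn
    // produces a readable message instead of an unwrapping-nil crash.
    if let scene = SCNScene(named: "art.scnassets/ship.scn") {
        sceneView.scene = scene
    } else {
        print("art.scnassets/ship.scn could not be loaded from the app bundle")
    }
}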
Posted. Last updated.
Post not yet marked as solved
1 Reply
212 Views
I'm using the ARKit-CoreLocation library to present POIs in the AR world, on iOS 14. But the thing is, I am not able to use ARCL's SceneLocationView because it is an SCNView. So when I add it as a subview, it overlaps my ARView contents and creates a new ARSession, leaving my ARView in the background. Code:

extension RealityKitViewController {
    typealias Context = UIViewControllerRepresentableContext<RealityKitViewControllerRepresentable>
}

class RealityKitViewController: UIViewController {
    let sceneLocationView = SceneLocationView()
    let arView = ARView(frame: .zero)
    let context: Context
    let pins: [Pin]

    var currentLocation: CLLocation? {
        return sceneLocationView.sceneLocationManager.currentLocation
    }

    init(_ context: Context, pins: [Pin]) {
        self.context = context
        self.pins = pins
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func makeArView() {
        // Start AR session
        let session = arView.session
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal, .vertical]
        session.run(config)

        // Add coaching overlay
        let coachingOverlay = ARCoachingOverlayView()
        coachingOverlay.session = session
        coachingOverlay.goal = .horizontalPlane
        coachingOverlay.delegate = context.coordinator
        arView.addSubview(coachingOverlay)

        arView.debugOptions = [.showFeaturePoints, .showAnchorOrigins, .showAnchorGeometry]

        // Handle ARSession events via delegate
        context.coordinator.view = arView
        session.delegate = context.coordinator
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // probably problem here
        sceneLocationView.frame = view.bounds
        arView.frame = sceneLocationView.bounds
        sceneLocationView.addSubview(arView)
        view.addSubview(sceneLocationView)
        addPins()
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        sceneLocationView.frame = view.bounds
    }

    override func viewWillAppear(_ animated: Bool) {
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.2) {
            super.viewWillAppear(animated)
            self.sceneLocationView.run()
        }
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneLocationView.pause()
    }

    func addPins() {
        guard let currentLocation = currentLocation, currentLocation.horizontalAccuracy < 16 else {
            return DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) { [weak self] in
                self?.addPins()
            }
        }
        self.pins.forEach { pin in
            guard pin.isLocation else { return }
            guard let location = pin.location else { return assertionFailure() }
            guard let image = UIImage(named: pin.image) else { return assertionFailure() }
            let node = LocationAnnotationNode(location: location, image: image)
            node.scaleRelativeToDistance = true
            sceneLocationView.addLocationNodeWithConfirmedLocation(locationNode: node)
        }
    }
}

// If you want to test it, you can place these pins at a location where you can easily get coordinates from Google Earth.

struct RealityKitViewControllerRepresentable: UIViewControllerRepresentable {
    let pins = [Pin(image: "test",
                    location: CLLocation(coordinate: CLLocationCoordinate2D(latitude: 0.03275742958,
                                                                            longitude: 0.32827424),
                                         altitude: 772.1489524841309),
                    isLocation: true)]
    @Binding var arActivate: Bool

    func makeUIViewController(context: Context) -> RealityKitViewController {
        let viewController = RealityKitViewController(context, pins: pins)
        return viewController
    }

    func updateUIViewController(_ uiViewController: RealityKitViewController, context: Context) { }

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }
}

class Coordinator: NSObject, ARSessionDelegate {
    weak var view: ARView?
}
Posted by GrandSir. Last updated.
Post not yet marked as solved
3 Replies
577 Views
Hello everyone, I created a 3D character animation in Blender and I would like to import it into Reality Composer. However I export the animation from Blender, it won't show up in Reality Composer; only the static object does. My USDZ scene has the character parented to an animated armature. Is there any way to import a 3D character animation (made in 3D software) into Reality Composer? Thanks.
Posted by Leka99. Last updated.
Post not yet marked as solved
0 Replies
182 Views
SPECIFIC ISSUE ENCOUNTERED
I'm playing VR videos through my app using the Metal graphics API with the Cardboard XR Plugin for Unity. After the recent iOS 16 update (and the Xcode 14 update too), videos in stereoscopic mode were flipped upside down and backwards. After trying to change the sides manually in code, I only managed to show the correct sides (it's not all upside down anymore), but when I turn the phone UP, the view moves DOWN toward the ground, and vice versa. The same issue occurs when moving the phone left and right. Also, the Unity-made popup question is shown on the wrong side (the backside, as shown in the video attachment).
Here is the video of the inverted (upside-down flip) view: https://www.dropbox.com/s/wacnknu5wf4cif1/Everything%20upside%20down.mp4?dl=0
Here is the video of the inverted movement: https://www.dropbox.com/s/7re3u1a5q81flfj/Inverted%20moving.mp4?dl=0
IMPORTANT: I did manage a few times to get it working in a local build, but when I build it for TestFlight, it is always inverted.
WHAT I SUSPECT
I found that numerous other developers encountered this issue when they were using Metal. Back when OpenGL ES 2 and 3 were still supported by Apple, switching to one of those did fix the issue. But now, since only Metal is supported with recent Unity versions, there is no workaround, and I would also like to use Metal.
DEVICE
Multiple iPhones running multiple iOS 16 versions have this issue. The specific OS version is 16.1.
EXPECTED BEHAVIOR
VR videos should show the right side up (not an upside-down image), and moving the phone up should show the upper part of the video, and vice versa. The same goes for moving left and right. Currently everything is flipped, but not always with the same kind of flip. In rare cases it's even shown correctly.
VERSIONS USED
What version of Google Cardboard are you using? Cardboard XR Plugin 1.18.1
What version of Unity are you using? 2022.1.13f1
Posted by Bat01. Last updated.
Post not yet marked as solved
0 Replies
210 Views
I have a use case of showing a translucent 3D model in my AR app until the plane gets detected. I am using ARKit and RealityKit. This experience already exists in AR Quick Look, as shown in the image below. Any idea how it can be done? I don't see any opacity API on ModelEntity. I am loading the model using the code below:

ModelEntity.loadModelAsync(contentsOf: fileURL).sink(
    receiveCompletion: { [weak self] completion in
        if case let .failure(error) = completion {
            print("Unable to load a model due to error \(error)")
        }
        self?.cancellableLoadRequest?.cancel()
        self?.cancellableLoadRequest = nil
    },
    receiveValue: { [weak self] modelEntity in
        self?.arView?.addModelEntity(newModelEntity: modelEntity)
    })
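One possible approach (a sketch under assumptions, not necessarily how AR Quick Look does it): on RealityKit 2 (iOS 15 and later), rewrite the loaded entity's materials with a transparent blending mode until the plane anchor is found, then restore full opacity. Whether each loaded material can be treated as a PhysicallyBasedMaterial depends on the asset.

import RealityKit

// Sketch: make a loaded ModelEntity translucent by adjusting the blending
// mode of its materials. The opacity value is illustrative.
func applyOpacity(_ opacity: Float, to modelEntity: ModelEntity) {
    guard var model = modelEntity.model else { return }
    model.materials = model.materials.map { material -> Material in
        guard var pbr = material as? PhysicallyBasedMaterial else { return material }
        pbr.blending = .transparent(opacity: .init(floatLiteral: opacity))
        return pbr
    }
    modelEntity.model = model
}

Once the plane is detected, the same helper could be called with an opacity of 1.0, or the original materials restored from a saved copy.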
Posted. Last updated.
Post not yet marked as solved
0 Replies
225 Views
Our Motion Capture app is not working properly anymore on iOS 16.1. Existing app builds crash on iOS 16, so we are forced to deliver a build targeting iOS 16. But the iOS 16 build has the issue that the feet of the character are always oriented vertically! I observe the same issue when running Apple's body detection sample app on devices with A12 and A13 chips. Is this a bug in iOS 16, or does the app code have to be adapted for iOS 16?
Posted by Uscher. Last updated.
Post marked as solved
1 Reply
223 Views
I am running into a strange bug where the exact same code compiles fine in one project but generates a compiler error in another project. In particular, I am trying to create an AnchorEntity from an ARAnchor.

func addModelTo(anchor: ARAnchor) {
    let entityAnchor = AnchorEntity(anchor: anchor)
    ...
}

The compiler error message is not even consistent. Sometimes I get a single error message:

Cannot convert value of type 'ARAnchor' to expected argument type 'AnchoringComponent.Target'

Other times I get an error with two possible issues:

No exact matches in call to initializer
Candidate '() -> AnchorEntity' requires 0 arguments, but 1 was provided (RealityFoundation.AnchorEntity)
Candidate expects value of type 'AnchoringComponent.Target' for parameter #1 (got '(anchor: ARAnchor)')

I'm trying to track down why this sometimes causes an error and sometimes it does not. Any pointers?
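For what it's worth, one common cause of that error (an assumption, not a confirmed diagnosis) is that the file or target doesn't have ARKit available, so the AnchorEntity overload that takes an ARAnchor is never considered. A minimal sketch of two common ways to write this; names are illustrative.

import ARKit        // needed for the AnchorEntity(anchor:) overload
import RealityKit

func addModelTo(anchor: ARAnchor, in arView: ARView) {
    // ARAnchor-based overload (requires ARKit in scope and an ARKit-capable target).
    let entityAnchor = AnchorEntity(anchor: anchor)
    arView.scene.addAnchor(entityAnchor)

    // Alternative that sidesteps the overload: anchor at the same world transform.
    let worldAnchor = AnchorEntity(world: anchor.transform)
    arView.scene.addAnchor(worldAnchor)
}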
Posted by Todd2. Last updated.
Post not yet marked as solved
1 Reply
249 Views
This is what my model looks like, along with its collision shapes (in pink). My problem is that when I try to tap on the bottommost part of my model (the circle), I can't, because the uppermost part of the model's collision shape is in the way. This is how I tried to create an accurate collision shape for my model:

@objc private func placeObject() {
    let entity = try! Entity.load(named: "Laryngeal")
    let geom = entity.findEntity(named: "Geom")

    for children in geom!.children {
        let childModelEntity = children as! ModelEntity
        childModelEntity.collision = CollisionComponent(shapes: [ShapeResource.generateConvex(from: childModelEntity.model!.mesh)])
    }

    let modelEntity = ModelEntity()
    modelEntity.addChild(entity)

    let anchorEntity = AnchorEntity(plane: .horizontal)
    anchorEntity.addChild(modelEntity)
    arView.installGestures([.all], for: modelEntity)
    arView.scene.addAnchor(anchorEntity)
}

So my question is: how can I create the most accurate collision shape, one that perfectly fits my model?
Posted by Pistifeju. Last updated.
Post not yet marked as solved
27 Replies
18k Views
I'm very excited about the new AirTag product and am wondering if there will be any new APIs introduced in iOS 14.5+ to allow developers to build apps around them outside the context of the Find My network? The contexts in which I am most excited about using AirTags are:

Gaming
Health / Fitness-focused apps
Accessibility features
Musical and other creative interactions within apps

I haven't been able to find any mention of APIs. Thanks in advance for any information that is shared here. Alexander
Posted by alexander. Last updated.
Post not yet marked as solved
2 Replies
241 Views
I have an iPhone 12 Pro and want to use the AR function, but the picture is black everywhere: with Pokemon Go, with the measuring app, and with a random AR app (to test). I have the latest update (iOS 16.1) and I have granted the required permissions to all apps. It has never worked.
Posted by Domebre. Last updated.
Post not yet marked as solved
1 Reply
423 Views
Hi there, I am happy to have found Reality Composer. As I continue to create content, I have run into some issues. When exporting my model to .OBJ format to bring into Reality Composer, I notice that the objects are not solid. I have gone through the export process to make sure that they are, but when viewed in Reality Composer the closest surfaces are not showing and the object looks see-through. Any idea what is going on?
Posted. Last updated.
Post not yet marked as solved
0 Replies
221 Views
I have some working sample code which projects a 3D point back to a screen position:

let pt = camera.projectPoint(worldPosition,
                             orientation: .portrait,
                             viewportSize: viewportSize)

However, since the ARKit camera is moving and the 3D mesh will be updated later, I would like to "store" the ARKit camera data and calculate the final mesh with the earlier cameras. There are two approaches I tried:

1. Store the whole camera and restore it later, but I couldn't figure out how to do it.
2. Store the camera's projectionMatrix and viewMatrix and run my own calculation later. But when I run my own projectPoint, the projection is still not 100% correct.

This is how I get projectionMatrix and viewMatrix:

let projectionMatrix = camera.projectionMatrix(for: .portrait,
                                               viewportSize: camera.imageResolution,
                                               zNear: zNear,
                                               zFar: zFar)
let viewMatrix = camera.viewMatrix(for: .portrait)
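For reference, a sketch of the manual projection those two matrices imply. One thing to watch: the viewport size used here has to match the one the projection matrix was generated for (the snippet above requests the matrix for camera.imageResolution while projectPoint was called with a separate viewportSize), which is an easy source of "almost correct" results. Names are illustrative.

import simd
import UIKit

// Sketch: project a world-space point using stored projection and view matrices.
func project(worldPosition: SIMD3<Float>,
             projectionMatrix: simd_float4x4,
             viewMatrix: simd_float4x4,
             viewportSize: CGSize) -> CGPoint {
    let clip = projectionMatrix * viewMatrix * SIMD4<Float>(worldPosition, 1)
    let ndc = SIMD3<Float>(clip.x, clip.y, clip.z) / clip.w
    // NDC x/y are in [-1, 1]; convert to screen points with a flipped y axis.
    let x = CGFloat(ndc.x + 1) * 0.5 * viewportSize.width
    let y = CGFloat(1 - ndc.y) * 0.5 * viewportSize.height
    return CGPoint(x: x, y: y)
}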
Posted by wwsea. Last updated.
Post not yet marked as solved
1 Reply
357 Views
I'm having an issue with depth values on iOS 16. I'm getting noisier depth data from my AVCaptureSession on iOS 16 than on iOS 15 and below. Has something changed in iOS 16? I'm using the dual wide camera. My session initialization (minus some exception handling for when the session fails to initialize) looks like:

let depthDataOutput = AVCaptureDepthDataOutput()
let defaultVideoDevice: AVCaptureDevice? = AVCaptureDevice.default(
    .builtInDualWideCamera,
    for: AVMediaType.video,
    position: .back
)
let videoDeviceInput = AVCaptureDeviceInput(device: videoDevice)

session.beginConfiguration()
session.sessionPreset = AVCaptureSession.Preset.hd1920x1080
guard session.canAddInput(videoDeviceInput) else { return }
...

var depthFormat: AVCaptureDevice.Format?
if session.canAddOutput(depthDataOutput) {
    session.addOutput(depthDataOutput)
    depthDataOutput.alwaysDiscardsLateDepthData = true
    depthDataOutput.isFilteringEnabled = false
    guard let connection = depthDataOutput.connection(with: .depthData) else { return }
    connection.isEnabled = true

    // Search for highest resolution available (by width)
    depthFormat = videoDevice.activeFormat.supportedDepthDataFormats.filter({
        kCVPixelFormatType_DepthFloat32
            == CMFormatDescriptionGetMediaSubType($0.formatDescription)
    }).max(by: { fmt1, fmt2 in
        CMVideoFormatDescriptionGetDimensions(fmt1.formatDescription).width
            < CMVideoFormatDescriptionGetDimensions(fmt2.formatDescription).width
    })
}
...
session.commitConfiguration()
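One detail that stands out in the snippet (it may simply be in the elided parts) is that the selected depthFormat is never applied to the device. Below is a sketch of doing that, and of re-enabling the built-in temporal filtering that smooths frame-to-frame noise; this is an observation about the code as shown, not a confirmed iOS 16 change.

// Sketch: apply the chosen depth format before committing the configuration.
// Assumes videoDevice and depthFormat from the code above.
if let depthFormat = depthFormat {
    do {
        try videoDevice.lockForConfiguration()
        videoDevice.activeDepthDataFormat = depthFormat
        videoDevice.unlockForConfiguration()
    } catch {
        print("Could not lock the capture device for configuration: \(error)")
    }
}

// Optional: temporal filtering trades a little latency for less depth noise.
depthDataOutput.isFilteringEnabled = true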
Posted by ivyas. Last updated.
Post not yet marked as solved
1 Reply
315 Views
Is there an easy way to disable ground shadows when viewing my 3D model on the iPhone (in Object view only; I would still like them in AR view)? This would be very handy to have as an option in Reality Converter. I found this article... https://developer.apple.com/documentation/RealityKit/ARView/RenderOptions-swift.struct/disableGroundingShadows ...but have no idea how to implement it, since I'm only using Blender and Reality Converter to get the resulting USDZ models onto my phone via AirDrop. Do I need Xcode? I really appreciate any assistance anyone can give here.
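In case it helps: the render option from that page only applies to a custom app built with Xcode and RealityKit; it is not something the stock AirDrop / Quick Look viewer exposes. In an app it is a one-liner on the ARView, sketched below.

import RealityKit

// Sketch: turn off RealityKit's grounding shadows in a custom ARView.
func disableGroundShadows(on arView: ARView) {
    arView.renderOptions.insert(.disableGroundingShadows)
}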
Posted by zissou. Last updated.
Post not yet marked as solved
1 Reply
249 Views
We are using RealityKit. We have reference images and we are able to detect them. Now I want to dynamically capture, from the camera stream, the region where the physical image has appeared: not the whole screen, only the exact region of the physical image.
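One possible approach, sketched under assumptions (it computes the screen-space bounding box only; cropping the pixel buffer itself is left out): project the four corners of the detected ARImageAnchor, using its physicalSize, into view coordinates with the frame's camera, and take their bounding box. The orientation and viewportSize values are illustrative and must match how the camera image is displayed.

import ARKit
import UIKit

// Sketch: screen-space bounding rectangle of a detected reference image.
func boundingRect(of imageAnchor: ARImageAnchor,
                  in frame: ARFrame,
                  viewportSize: CGSize) -> CGRect {
    let halfWidth = Float(imageAnchor.referenceImage.physicalSize.width) / 2
    let halfHeight = Float(imageAnchor.referenceImage.physicalSize.height) / 2

    // The detected image lies in the anchor's local x/z plane.
    let localCorners: [SIMD4<Float>] = [
        SIMD4(-halfWidth, 0, -halfHeight, 1),
        SIMD4( halfWidth, 0, -halfHeight, 1),
        SIMD4( halfWidth, 0,  halfHeight, 1),
        SIMD4(-halfWidth, 0,  halfHeight, 1)
    ]

    let projected = localCorners.map { corner -> CGPoint in
        let world = imageAnchor.transform * corner
        return frame.camera.projectPoint(SIMD3(world.x, world.y, world.z),
                                         orientation: .portrait,
                                         viewportSize: viewportSize)
    }

    let xs = projected.map { $0.x }
    let ys = projected.map { $0.y }
    return CGRect(x: xs.min()!, y: ys.min()!,
                  width: xs.max()! - xs.min()!,
                  height: ys.max()! - ys.min()!)
}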
Posted by spk8699. Last updated.