I'm very excited about the new AirTag product, and I'm wondering: will any new APIs be introduced in iOS 14.5+ to allow developers to build apps around AirTags outside the context of the Find My network?
The contexts in which I am most excited about using AirTags are:
Gaming
Health / Fitness-focused apps
Accessibility features
Musical and other creative interactions within apps
I haven't been able to find any mention of APIs. Thanks in advance for any information that is shared here.
Alexander
AR / VR
Discuss augmented reality and virtual reality app capabilities.
I'm just starting to think about diving into augmented reality. Any suggestions on where I should start? For example, are there any good tutorials or white papers out there to get started with the basics and get some MVPs cranked out?
Hi
Is it possible to have a RoomCaptureSession and an ARSession running together? We need feature points.
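For context, this is roughly what I'm hoping is possible, sketched below. I'm assuming RoomCaptureSession exposes its underlying ARSession (the arSession property) and that rawFeaturePoints is still populated while RoomPlan is scanning; both are assumptions on my part.

import ARKit
import RoomPlan

final class ScanCoordinator {
    let captureSession = RoomCaptureSession()

    func start() {
        captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    // Assumption: the capture session's underlying ARSession is accessible,
    // so feature points can be read from its current frame while RoomPlan scans.
    func logFeaturePoints() {
        if let points = captureSession.arSession.currentFrame?.rawFeaturePoints {
            print("Feature points this frame: \(points.count)")
        }
    }
}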
Pre-planning a project to use multiple 360 cameras set up in a grid to generate an immersive experience, hoping to use photogrammetry to generate 3D images of objects inside the grid. beeconcern.ca wants to expand their bee gardens, and theconcern.ca wants to use it to make a live immersive apiary experience. Still working out the best method for compiling, editing, and rendering; I have been leaning toward UE5, but am still seeking advice.
There was no mention of whether the Vision Pro can be used outside. Several of the other AR/VR systems out there prohibit this (the sensors get overloaded). Can the Vision Pro be used in sunlight?
Thanks!
Hi, I am a student from London studying app development in Arizona. I have a few ideas for apps that I believe would add to the Apple AR experience. I was wondering where I should go about getting started in the development process.
Any guidance would be much appreciated :)
Hi,
Is there a way in visionOS to anchor an entity to the POV via RealityKit?
I need an entity which is always fixed to the 'camera'.
I'm aware that this is discouraged from a design perspective as it can be visually distracting. In my case though I want to use it to attach a fixed collider entity, so that the camera can collide with objects in the scene.
Edit:
ARView on iOS has a lot of very useful helper properties and functions like cameraTransform (https://developer.apple.com/documentation/realitykit/arview/cameratransform)
How would I get this information on visionOS? RealityView's content does not seem to offer anything comparable.
An example use case: I would like to add an entity to the scene at my user's eye level, basically depending on their height.
I found https://developer.apple.com/documentation/realitykit/realityrenderer which has an activeCamera property but so far it's unclear to me in which context RealityRenderer is used and how I could access it.
Appreciate any hints, thanks!
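For reference, here is the kind of thing I've been sketching. Two assumptions on my part: that AnchorEntity(.head) is honored inside a RealityView, and that WorldTrackingProvider.queryDeviceAnchor is the intended way to read the device (camera) pose.

import ARKit
import QuartzCore
import RealityKit

// Option 1 (sketch): parent the fixed collider to a head-target anchor,
// then add the returned anchor in the RealityView make closure.
func makeHeadAnchoredCollider() -> AnchorEntity {
    let headAnchor = AnchorEntity(.head)
    let collider = Entity()
    collider.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.2)]))
    headAnchor.addChild(collider)
    return headAnchor
}

// Option 2 (sketch): query the device pose explicitly via ARKit,
// e.g. to place an entity at the user's eye level once.
// Assumes an ARKitSession is already running this WorldTrackingProvider.
func currentDevicePose(from worldTracking: WorldTrackingProvider) -> Transform? {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    return Transform(matrix: device.originFromAnchorTransform)
}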
In full immersive (VR) mode on visionOS, if I want to use compositor services and a custom Metal renderer, can I still get the user’s hands texture so my hands appear as they are in reality? If so, how?
If not, is this a valid feature request in the short term? It’s purely for aesthetic reasons. I’d like to see my own hands, even in immersive mode.
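For what it's worth, the only related control I've come across is the upperLimbVisibility scene modifier. Below is a sketch of how I'd expect it to combine with a CompositorLayer; I'm assuming it also applies to fully immersive Metal rendering, which is exactly what I'd like confirmed.

import CompositorServices
import SwiftUI

struct FullyImmersiveMetalApp: App {
    @State private var immersionStyle: ImmersionStyle = .full

    var body: some Scene {
        ImmersiveSpace(id: "metal") {
            CompositorLayer { layerRenderer in
                // Hand off to the custom Metal render loop here.
            }
        }
        .immersionStyle(selection: $immersionStyle, in: .full)
        // Assumption: this keeps the passthrough hands visible even in full immersion.
        .upperLimbVisibility(.visible)
    }
}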
In RealityComposerPro, I've set up a Custom Material that receives an Image File as an input. When I manually select an image and upload it to RealityComposerPro as the input value, I'm able to easily drive the surface of my object/scene with this image.
However, I am unable to drive the value of this "cover" parameter via shaderGraphMaterial.setParameter(name: , value: ) in Swift since there is no way to supply an Image as a value of type MaterialParameters.Value.
When I print out shaderGraphMaterials.parameterNames I see both "color" and "cover", so I know this parameter is exposed.
Is this a feature that will be supported soon / is there a workaround? I assume that if something can be created as an input to Custom Material (in this case an Image File), there should be an equivalent way to drive it via Swift.
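For reference, this is roughly what I'd expect to work, loading the image as a TextureResource first. Two assumptions on my part: that MaterialParameters.Value has a texture case, and that the "cover" input accepts it.

import RealityKit

func setCoverImage(on material: inout ShaderGraphMaterial, imageNamed name: String) throws {
    // Assumption: the image ships in the app bundle and can be loaded as a texture.
    let texture = try TextureResource.load(named: name)
    // Assumption: a texture-typed case on MaterialParameters.Value maps to the
    // Image File input exposed as "cover" in Reality Composer Pro.
    try material.setParameter(name: "cover", value: .textureResource(texture))
}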
Thanks!
I’m still a little unsure about the various spaces and capabilities. I’d like to make full use of hand tracking, joints and all. In the mode with passthrough and a single application present (not a shared space), is that available? (I am pretty sure that the answer is “yes,” but I’d like to confirm.) What is this mode called in the system? Mixed full-space?
This sample code for the ObjectCapture API no longer works since the latest beta:
https://developer.apple.com/documentation/RealityKit/guided-capture-sample
What would be the best way to go about recognizing a 3D physical object, then anchoring digital 3D assets to it? I would also like to use occlusion shaders and masks on the assets too.
There's a lot of info out there, but the most current practices keep changing and I'd like to start in the right direction!
If there is a tutorial or demo file that someone can point me to that would be great!
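In case it helps frame the question, the approach I've pieced together so far on iOS uses ARKit reference objects plus scene-understanding occlusion; a rough sketch follows. It assumes an AR Resource Group named "ScannedObjects" already exists in the asset catalog, and I'd love to hear whether this is still the current practice.

import ARKit
import RealityKit

func startObjectDetection(in arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    // Reference objects scanned earlier (e.g. with Apple's object-scanning sample).
    configuration.detectionObjects = ARReferenceObject.referenceObjects(
        inGroupNamed: "ScannedObjects", bundle: nil) ?? []

    // Occlude virtual content behind real-world geometry on LiDAR devices.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
        arView.environment.sceneUnderstanding.options.insert(.occlusion)
    }
    arView.session.run(configuration)
}

// In an ARSessionDelegate, anchor digital assets to the detected physical object.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for case let objectAnchor as ARObjectAnchor in anchors {
        // An AnchorEntity(anchor: objectAnchor) could hold the 3D assets here.
        print("Detected object: \(objectAnchor.referenceObject.name ?? "unnamed")")
    }
}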
Hello everyone,
I am trying to publish an augmented reality app to the App Store. I have prepared a physical book, and the app is used to scan some pages of that book and view some of the images in 3D. The app was rejected with "apps cannot require users to purchase unrelated products or engage in advertising or marketing activities to unlock app functionality" because image markers (my book) are needed for it. Should I add an in-app purchase to buy the book (it is free at the moment)? I have seen toys being operated with apps. I am just trying to understand how they would solve this issue, because, similar to my case, the toy needs to be bought/obtained separately.
Thank you.
Hi everyone! I'm totally new to programming and trying to learn AR by working on a simple AR app.
Right now I have a tap gesture for loading my cat model [link to my code] (re-tap to relocate the cat). However, I want to add a button to confirm my cat model's position and then make my tap function only work for my other models (ball/heart/fish models).
I have no idea how to make that happen. Can anyone give me some guidance, please?
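Something like the state flag below is what I'm imagining, but I'm not sure how to wire it into my tap handler (names here are hypothetical, not my actual code):

import SwiftUI

struct PlacementControls: View {
    // Hypothetical flag: false while the cat can still be re-tapped and moved.
    @State private var isCatPlaced = false

    var body: some View {
        Button(isCatPlaced ? "Cat placed" : "Confirm cat position") {
            isCatPlaced = true
        }
        .disabled(isCatPlaced)
    }

    // Called from the existing tap gesture handler.
    func handleTap(at location: CGPoint) {
        if isCatPlaced {
            // Place the ball/heart/fish models here.
        } else {
            // Relocate the cat model here.
        }
    }
}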
Does anyone know if we will be able to AirPlay content from another Apple device, say an iPad or iPhone, to the Vision Pro?
I am trying to make a world anchor where a user taps a detected plane.
How am I trying this?
First, I add an entity to a RealityView like so:
let anchor = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
anchor.transform.rotation *= simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))
let interactionEntity = Entity()
interactionEntity.name = "PLANE"
let collisionComponent = CollisionComponent(shapes: [ShapeResource.generateBox(width: 2.0, height: 2.0, depth: 0.02)])
interactionEntity.components.set(collisionComponent)
interactionEntity.components.set(InputTargetComponent())
anchor.addChild(interactionEntity)
content.add(anchor)
This:
Declares an anchor that requires a wall 2 meters by 2 meters to appear in the scene with continuous tracking
Makes an empty entity and gives it a 2m by 2m by 2cm collision box
Attaches the collision entity to the anchor
Finally then adds the anchor to the scene
It appears in the scene like this:
Great! Appears to sit right on the wall.
I then add a tap gesture recognizer like this:
SpatialTapGesture()
.targetedToAnyEntity()
.onEnded { value in
guard value.entity.name == "PLANE" else { return }
var worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
let pose = Pose3D(position: worldPosition, rotation: value.entity.transform.rotation)
let worldAnchor = WorldAnchor(transform: simd_float4x4(pose))
let model = ModelEntity(mesh: .generateBox(size: 0.1, cornerRadius: 0.03), materials: [SimpleMaterial(color: .blue, isMetallic: true)])
model.transform = Transform(matrix: worldAnchor.transform)
realityViewContent?.add(model)
I ASSUME This:
Makes a world position from where the tap connects with the collision entity.
Integrates the position and the collision plane's rotation to create a Pose3D.
Makes a world anchor from that pose (So it can be persisted in a world tracking provider)
Then I make a basic cube entity and give it that transform.
Weird stuff: it doesn't appear on the plane... it appears behind it.
Why? What have I done wrong?
The X and Y of the tap location appears spot on, but something is "off" about the z position.
Also, is there a recommended way to debug this with the available tools?
I'm guessing I'll have to file a DTS about this because feedback on the forum has been pretty low since labs started.
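For what it's worth, the sanity check I'm planning is to drop a small marker directly at the converted point, before any Pose3D/WorldAnchor is involved, to narrow down whether the offset comes from the conversion or from the anchor transform (sketch only):

// Sketch: place a tiny marker exactly at the converted tap point.
let marker = ModelEntity(mesh: .generateSphere(radius: 0.01),
                         materials: [SimpleMaterial(color: .green, isMetallic: false)])
marker.position = worldPosition // the SIMD3<Float> from value.convert(...)
realityViewContent?.add(marker)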
Hello, I have created a view with a full 360° image, and I need to perform a task when the user taps anywhere on the screen (to leave the dome), but no matter what I try it just does not work; it doesn't print anything at all.
import SwiftUI
import RealityKit
import RealityKitContent
struct StreetWalk: View {
@Binding var threeSixtyImage: String
@Binding var isExitFaded: Bool
var body: some View {
RealityView { content in
// Create a material with a 360 image
guard let url = Bundle.main.url(forResource: threeSixtyImage, withExtension: "jpeg"),
let resource = try? await TextureResource(contentsOf: url) else {
// If the asset isn't available, something is wrong with the app.
fatalError("Unable to load starfield texture.")
}
var material = UnlitMaterial()
material.color = .init(texture: .init(resource))
// Attach the material to a large sphere.
let streeDome = Entity()
streeDome.name = "streetDome"
streeDome.components.set(ModelComponent(
mesh: .generatePlane(width: 1000, depth: 1000),
materials: [material]
))
// Ensure the texture image points inward at the viewer.
streeDome.scale *= .init(x: -1, y: 1, z: 1)
content.add(streeDome)
}
update: { updatedContent in
// Create a material with a 360 image
guard let url = Bundle.main.url(forResource: threeSixtyImage,
withExtension: "jpeg"),
let resource = try? TextureResource.load(contentsOf: url) else {
// If the asset isn't available, something is wrong with the app.
fatalError("Unable to load starfield texture.")
}
var material = UnlitMaterial()
material.color = .init(texture: .init(resource))
updatedContent.entities.first?.components.set(ModelComponent(
mesh: .generateSphere(radius: 1000),
materials: [material]
))
}
.gesture(tap)
}
var tap: some Gesture {
SpatialTapGesture().targetedToAnyEntity().onChanged{ value in
// Access the tapped entity here.
print(value.entity)
print("maybe you can tap the dome")
// isExitFaded.toggle()
}
}
}
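One thing I'm unsure about: does the dome need collision and input-target components before .targetedToAnyEntity() can hit it at all? This is the kind of setup I mean (a sketch of my suspicion, added right after content.add(streeDome)):

// Sketch: components I suspect the dome needs for SpatialTapGesture to target it.
streeDome.components.set(InputTargetComponent())
streeDome.components.set(CollisionComponent(shapes: [.generateSphere(radius: 1000)]))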
Hi,
Allow me to explain the use case.
I have a business app that makes use of the camera to scan QR codes, and it also makes use of a handheld gun to scan barcodes.
I need assistance on how I can use either of the above two on the Vision Pro (visionOS). I have gone through the WWDC23 video, as well as the documentation on what is and isn't available regarding the camera.
However, is it possible to have access to the outside view using RealityView or any other 3D component? Or to use existing classes to see what's outside and render it in the UI, or is there any probable workaround for how to achieve this?
Any help is appreciated.
Thanks,
Lokesh Sehgal
I have some strange behavior in my app. When I set the position to .zero, the sphere is visible as expected. But when I change it to any other number, no matter which or how small, the sphere isn't visible in the view.
The RealityView
import SwiftUI
import RealityKit
import RealityKitContent
struct TheSphereOfDoomRV: View {
@StateObject var viewModel: SphereViewModel = SphereViewModel()
let sphere = SphereEntity(radius: 0.25, materials: [SimpleMaterial(color: .red, isMetallic: true)], name: "TheSphere")
var body: some View {
RealityView { content, attachments in
content.add(sphere)
} update: { content, attachments in
sphere.scale = SIMD3<Float>(x: viewModel.scale, y: viewModel.scale, z: viewModel.scale)
} attachments: {
VStack {
Text("The Sphere of Doom is one of the most powerful Objects. You can interact with him in every way you can imagine ").multilineTextAlignment(.center)
Button {
} label: {
Text("Play Video!")
}
}.tag("description")
}.modifier(GestureModifier()).environmentObject(viewModel)
}
}
SphereEntity:
import Foundation
import RealityKit
import RealityKitContent
class SphereEntity: Entity {
private let sphere: ModelEntity
@MainActor
required init() {
sphere = ModelEntity()
super.init()
}
init(radius: Float, materials: [Material], name: String) {
sphere = ModelEntity(mesh: .generateSphere(radius: radius), materials: materials)
sphere.generateCollisionShapes(recursive: false)
sphere.components.set(InputTargetComponent())
sphere.components.set(HoverEffectComponent())
sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: radius)]))
sphere.name = name
super.init()
self.addChild(sphere)
self.position = .zero // .init(x: Float, y: Float, z: Float) and [Float, Float, Float] doesn't work ...
}
}
Hi there!
I am working on an app that is designed to be a 3D “widget” that could sit on someone’s desk or shelf and continue to sit there even when a user isn’t actively interacting with it, as if it was a real object on a shelf.
Is there any way to place objects on surfaces for apps running in the Shared Space? From what I understand, the position of a volume is determined by the user and cannot be changed programmatically. I understand Full Space apps have access to anchors and plane detection data, but I don't want people to have to close everything else to use my app.
A few questions:
In the simulator, it seems that when I drag the volume around to try to place it on a surface, geometry inside of a RealityView can clip through “real” objects. Is this the expected behavior on a real device too?
If so, could using ARKit in a Full Space to position the volume (roughly as sketched below), then switching back to a Shared Space, be an option?
Also, if the app is closed, and reopened, will the volume maintain its position relative to the user’s real-world environment?
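For the second question, this is what I have in mind for the Full Space step: run plane detection briefly, remember a surface transform for the volume's content, then switch back to the Shared Space. A sketch only, assuming ARKit's PlaneDetectionProvider is the right tool for this.

import ARKit

// Sketch: find a table-like horizontal surface while the app temporarily owns a Full Space.
func findHorizontalSurface() async throws -> simd_float4x4? {
    let session = ARKitSession()
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal])
    try await session.run([planeDetection])

    for await update in planeDetection.anchorUpdates {
        guard case .added = update.event else { continue }
        if case .table = update.anchor.classification {
            return update.anchor.originFromAnchorTransform
        }
    }
    return nil
}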
Thanks!