After I played audio on the entity, the volume was very low and I wanted to adjust it, but I couldn't find an API for that. What should I do?
if let audio = audioResources {
    entity.playAudio(audio)
}
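If the goal is to adjust playback volume, one option (a minimal sketch, assuming audioResources is an AudioResource loaded elsewhere) is to keep the AudioPlaybackController that playAudio(_:) returns and set its gain, which is measured in relative decibels with 0 as the nominal level:

if let audio = audioResources {
    // playAudio(_:) returns a controller for this playback instance.
    let controller = entity.playAudio(audio)
    // Gain is in decibels relative to the resource's nominal level; 0 is nominal,
    // negative values attenuate.
    controller.gain = -6
}

If the entity plays spatial audio, a SpatialAudioComponent with its own gain value set on the entity is another place to adjust loudness.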
Hello All,
We're building a scene that works like a time-travel door: when the user selects a scene, they pass through the door and the new scene is revealed. The transition in between needs to feel natural, and ideally the user could walk through it in an immersive space...
There is very little information about this right now. How can I get started? Is there any material I can refer to?
Thanks
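One possible starting point for the walk-through door is RealityKit's portal API on visionOS. Below is a minimal sketch (assumptions: WorldComponent, PortalComponent, and PortalMaterial from visionOS RealityKit; destinationScene and makePortalDoor are placeholder names for content you load yourself):

import RealityKit

// Builds a "door" plane that shows destinationScene as if looking through it.
func makePortalDoor(destinationScene: Entity) -> Entity {
    // The world shown on the other side of the door.
    let world = Entity()
    world.components.set(WorldComponent())
    world.addChild(destinationScene)

    // The door itself: a plane with a portal material that reveals the world entity.
    let door = ModelEntity(mesh: .generatePlane(width: 1.0, height: 2.0),
                           materials: [PortalMaterial()])
    door.components.set(PortalComponent(target: world))

    let root = Entity()
    root.addChild(world)
    root.addChild(door)
    return root
}

From there, the transition could be driven by how the user moves relative to the door inside an immersive space, though that part is application-specific.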
I'm building a weather app where users can search for a location to get its weather, but the search results only show locations in my own country, not globally. For example, I'm in China and I can't search for New York; it just shows nothing. Here's my code:
import SwiftUI
import MapKit

@Observable
class SearchPlaceManager: NSObject {
    var searchText: String = ""
    let searchCompleter = MKLocalSearchCompleter()
    var searchResults: [MKLocalSearchCompletion] = []

    override init() {
        super.init()
        searchCompleter.resultTypes = .address
        searchCompleter.delegate = self
    }

    @MainActor
    func searchLocation() {
        if !searchText.isEmpty {
            searchCompleter.queryFragment = searchText
        }
    }
}

extension SearchPlaceManager: MKLocalSearchCompleterDelegate {
    func completerDidUpdateResults(_ completer: MKLocalSearchCompleter) {
        withAnimation {
            self.searchResults = completer.results
        }
    }
}
Also, I've tried setting searchCompleter.region = MKCoordinateRegion(center: CLLocationCoordinate2D(latitude: 0, longitude: 0), span: MKCoordinateSpan(latitudeDelta: 180, longitudeDelta: 360)), but it doesn't work.
Hi!
I'm trying to play video on a monitor 3D model, as if the screen were a material.
I'd like to know whether this is possible. I searched for it but couldn't find enough information. Thank you in advance.
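One way this is usually done (a minimal sketch; the entity name "Screen" and the file "video.mp4" are placeholders) is to apply RealityKit's VideoMaterial, backed by an AVPlayer, to the part of the monitor model that represents the screen:

import RealityKit
import AVFoundation

func applyVideo(to monitor: Entity) {
    guard let url = Bundle.main.url(forResource: "video", withExtension: "mp4"),
          let screen = monitor.findEntity(named: "Screen") as? ModelEntity else { return }

    let player = AVPlayer(url: url)
    // VideoMaterial maps the player's frames onto the mesh using its UV coordinates.
    screen.model?.materials = [VideoMaterial(avPlayer: player)]
    player.play()
}

The video will only look right if the screen part of the model has sensible UVs, so it may be worth checking the mesh in Reality Composer Pro first.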
"Although Xcode generates loading methods for all Reality Composer files in your Xcode project"
I do not find this to be true, sadly.
Does anyone have any luck or insight on how one can build just a simple MacOS app that will import a scene from a Reality File?
The documentation suggests that the simple act of bringing a .Reality File in (What about .realitycomposerpro?) will generate code, but that doesn't seem to happen.
The sample code (Spaceship) does not compile for MacOS.
I'd really love just the most generic template of an Xcode Project that compiles with a button that pops open a scene., Like the VisionOS default immersive project.
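For what it's worth, here is a minimal sketch of that kind of template, under assumptions: macOS 15's RealityView, plus a Reality Composer Pro package whose bundle is exposed as realityKitContentBundle and contains a root entity named "Scene" (those names come from the default visionOS template, not from any generated loader):

import SwiftUI
import RealityKit
import RealityKitContent   // assumption: the Swift package created by Reality Composer Pro

struct ContentView: View {
    @State private var showScene = false

    var body: some View {
        VStack {
            Button("Open Scene") { showScene = true }
            if showScene {
                RealityView { content in
                    // Load the compiled scene by name from the content bundle.
                    if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                        content.add(scene)
                    }
                }
            }
        }
    }
}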
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: Swift, AR / VR, RealityKit, Reality Composer Pro
Dear Apple Engineers,
I am working on a project in visionOS and need to implement a curved surface effect for video playback, where the width of the surface can be dynamically adjusted. Specifically, I want the video to be displayed on a curved surface (similar to a scroll unfolding), and the user should be able to adjust the width of this surface.
I have the following specific questions:
How can I implement a curved surface for video playback and ensure the video content is not stretched or distorted on the surface?
How can I create a dynamic curved surface (such as a bending plane) in RealityKit or visionOS, where the width can be adjusted by the user?
Is it possible to achieve more complex curved surface effects (such as scroll unfolding or bending) using Shaders or other techniques?
Thank you very much for your help!
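One possible route, sketched below under assumptions (a hand-built cylindrical-segment mesh generated with RealityKit's MeshDescriptor, a VideoMaterial for playback, and placeholder names such as makeCurvedScreen): rebuild the mesh whenever the user changes the arc angle, and keep the texture coordinates proportional to the arc so the video is not stretched.

import RealityKit
import AVFoundation

func makeCurvedScreen(radius: Float = 1.0,
                      height: Float = 0.6,
                      arcAngle: Float = .pi / 2,   // the user-adjustable "width"
                      segments: Int = 32,
                      player: AVPlayer) throws -> ModelEntity {
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for i in 0...segments {
        let t = Float(i) / Float(segments)
        let angle = (t - 0.5) * arcAngle
        let x = radius * sin(angle)
        let z = -radius * cos(angle)
        // Two vertices per column: bottom and top.
        positions.append([x, -height / 2, z])
        positions.append([x, height / 2, z])
        // U follows the arc, V spans the height, so the video keeps its mapping.
        uvs.append([t, 1])
        uvs.append([t, 0])
    }
    for i in 0..<segments {
        let base = UInt32(i * 2)
        // Two triangles per quad, wound to face the viewer.
        indices.append(contentsOf: [base, base + 2, base + 1,
                                    base + 1, base + 2, base + 3])
    }

    var descriptor = MeshDescriptor(name: "curvedScreen")
    descriptor.positions = MeshBuffer(positions)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    descriptor.primitives = .triangles(indices)

    let mesh = try MeshResource.generate(from: [descriptor])
    let material = VideoMaterial(avPlayer: player)
    return ModelEntity(mesh: mesh, materials: [material])
}

Regenerating the mesh with a new arcAngle (and swapping it into the entity's ModelComponent) gives the adjustable width; more elaborate scroll-unfolding deformations would likely need LowLevelMesh or a shader-based approach.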
In RealityKit's particle effects, there is a type:
ParticleEmitterComponent.Presets
It can invoke certain preset particle effects, but I am interested in learning how to modify these particle emitters (such as adjusting the color, the number of particles, or the emission range).
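A preset is just a regular ParticleEmitterComponent value, so its properties can be edited before attaching it to an entity. A minimal sketch (assuming the sparks preset; the property names below are from RealityKit's ParticleEmitterComponent API):

import RealityKit

var emitter = ParticleEmitterComponent.Presets.sparks

// Generation range: the emitter's shape and its size in meters.
emitter.emitterShape = .sphere
emitter.emitterShapeSize = [0.2, 0.2, 0.2]

// Number of particles spawned per second.
emitter.mainEmitter.birthRate = 500

// Tint all particles with a single constant color.
emitter.mainEmitter.color = .constant(.single(.systemOrange))

let sparksEntity = Entity()
sparksEntity.components.set(emitter)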
This might be a very silly question; anyway, I tried many ways and didn't find a solution:
My Mac mini M4 is the base model with only 16 GB of memory, which is very tight for developing a Vision Pro application and testing it in the simulator (CleanMacX keeps warning me about low memory). I want to debug and test the application directly on the Vision Pro instead of the simulator (also, my app needs two-handed gestures that the simulator might not support). Are there proper instructions on how to test/debug/run the app on the Vision Pro device directly, for the sake of speed?
Topic: Spatial Computing
SubTopic: General
Hi, I would like to train Gaussian splats from my object captures, so I need a point cloud and camera positions together with the original photos to train the splats in an app like postShot.
I could do this with Reality Capture, which supports exporting point clouds and camera positions, but it does not do well with turntable photogrammetry, while the Apple Object Capture API produces really solid results with turntable images.
So my question is: can I export camera data from my object captures to use in another application? Or is there perhaps a plan to add this feature in the future?
It would be really helpful for creating ultra-realistic 3D objects in Gaussian splat format.
Thanks for any suggestions…
When using RoomPlan to collect data and processing it with StructureBuilder, the app crashes.
Crash thread:
RoomScanCore.offlineFloorPlanGeneration
How should I deal with this issue? I’ve already implemented crash capture, but no crash was logged—the app just crashes directly.
Is this behaviour expected? For example, if I use
let materials = [SimpleMaterial(color: .red, isMetallic: false)]
occlusion works normally, but with
let materials = [SimpleMaterial(color: .red.withAlphaComponent(0.5), isMetallic: false)]
I can see my cube through real-world objects such as tables, columns, etc.
I get the same behaviour when using a CustomMaterial built from a shader and applying
customMaterial.blending = .opaque and customMaterial.blending = .transparent(opacity:) respectively.
Here are the code snippets.
import SwiftUI
import RealityKit

struct RealityViewTestView: View {
    @State private var texts: [String] = []

    var body: some View {
        RealityView { content, attachments in
        } update: { content, attachments in
            for text in texts {
                if let textEntity = attachments.entity(for: text) {
                    textEntity.position.x = Float.random(in: -0.1...0.1)
                    content.add(textEntity)
                }
            }
        } attachments: {
            ForEach(texts, id: \.self) { text in
                Attachment(id: text) {
                    Text(text)
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    texts.append(String(UUID().uuidString.prefix(6)))
                }
            }
            ToolbarItem {
                Button("Remove") {
                    texts.remove(at: Int.random(in: 0..<texts.count))
                }
            }
        }
    }
}
struct RealityViewTestView: View {
    @State private var texts: [String] = []
    @State private var entities: [Entity] = []

    var body: some View {
        RealityView { content, attachments in
        } update: { content, attachments in
            // for text in texts {
            //     if let textEntity = attachments.entity(for: text) {
            //         textEntity.position.x = Float.random(in: -0.1...0.1)
            //         content.add(textEntity)
            //     }
            // }
            for entity in entities {
                content.add(entity)
            }
        } attachments: {
            ForEach(texts, id: \.self) { text in
                Attachment(id: text) {
                    Text(text)
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    // texts.append(String(UUID().uuidString.prefix(6)))
                    let m = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                        materials: [SimpleMaterial(color: .white, isMetallic: false)])
                    m.position.x = Float.random(in: -0.2...0.2)
                    entities.append(m)
                }
            }
            ToolbarItem {
                Button("Remove") {
                    // texts.remove(at: Int.random(in: 0..<texts.count))
                    entities.removeLast()
                }
            }
        }
    }
}
About the first code snippet: when I remove an element from texts, why does content automatically remove the corresponding entity? And in the second code snippet, content does not automatically remove the corresponding entity. I am very curious.
Hello,
I checked following documentations.
Vision | Apple Developer Documentation
Discover Swift enhancements in the Vision framework - WWDC24 - Videos - Apple Developer
I saw that the Vision framework is available on visionOS.
So I want to know whether it's possible to use the Vision framework on visionOS for tracking human and animal body poses, or whether there are limits to using it on visionOS.
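For reference, a minimal sketch of the classic request-based API, which should behave the same wherever the Vision framework is available (assumption: cgImage is an image you already obtained, e.g. from a camera frame or a loaded photo):

import Vision
import CoreGraphics

func detectBodyPoses(in cgImage: CGImage) throws {
    let humanRequest = VNDetectHumanBodyPoseRequest()
    let animalRequest = VNDetectAnimalBodyPoseRequest()

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([humanRequest, animalRequest])

    // Each observation exposes named joints with normalized image coordinates.
    for observation in humanRequest.results ?? [] {
        let joints = try observation.recognizedPoints(.all)
        print("human joints:", joints.count)
    }
    for observation in animalRequest.results ?? [] {
        let joints = try observation.recognizedPoints(.all)
        print("animal joints:", joints.count)
    }
}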
I want a custom hover effect on a sentence, not on a button.
I want a hover effect to appear when the user looks at one sentence out of many sentences.
So I searched the reference videos https://youtu.be/DftRTx1oX6E , https://developer.apple.com/videos/play/wwdc2023/10110/ on Apple's YouTube channel and the visionOS documentation.
But I haven't gotten anywhere near the feature I want yet.
I respectfully request someone to help me. :)
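One possible approach is to give each sentence its own view, hover shape, and hover effect; a minimal sketch (the sentence list and corner radius are placeholders):

import SwiftUI

struct SentenceHoverView: View {
    let sentences = [
        "The first sentence.",
        "The second sentence.",
        "The third sentence."
    ]

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            ForEach(sentences, id: \.self) { sentence in
                Text(sentence)
                    .padding(4)
                    // The region the system highlights when the user looks at this sentence.
                    .contentShape(.hoverEffect, .rect(cornerRadius: 8))
                    .hoverEffect(.highlight)
            }
        }
    }
}

Sentences flowing inside a single paragraph (rather than stacked vertically) would need a custom layout, which is trickier to combine with per-sentence hover regions.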
I'm just trying to set up some kind of visual on my iPad for testing and play purposes... I tried adding iPad as a destination to both Apple visual apps.
I get this error for both:
Building for 'iphoneos', but '12.0' must be >= '18.0'
Topic: Spatial Computing
SubTopic: General
Does anyone have any idea whether Apple plans to add UDIM support for its 3D development tools? It is a real bummer not to have this feature, and it makes an otherwise clean USD pipeline kind of suck.
A ShaderGraphMaterial with an Occlusion Surface Output generated with RealityComposer 2 fails to load on iOS 18 and macOS 15 with the following error:
RealityFoundation.ShaderGraphMaterial.LoadError.invalidTypeFound (https://developer.apple.com/documentation/realitykit/shadergraphmaterial/loaderror/invalidtypefound)
This happens with both https://developer.apple.com/documentation/shadergraph/realitykit/occlusion-surface-(realitykit) and https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit)
RealityView { content in
    do {
        let bgEntity = ModelEntity(mesh: .generateCone(height: 0.5, radius: 0.1),
                                   materials: [SimpleMaterial(color: .red, isMetallic: true)])
        bgEntity.position.z = -0.2
        content.add(bgEntity)

        let occlusionMaterial = try await ShaderGraphMaterial(named: "/Root/OcclusionMaterial",
                                                              from: "OcclusionMaterial")
        let testEntity = ModelEntity(mesh: .generateSphere(radius: 0.4),
                                     materials: [occlusionMaterial])
        content.add(testEntity)
        content.cameraTarget = testEntity
    } catch {
        print("Shader Graph Load Error:")
        dump(error)
    }
}
.realityViewCameraControls(.orbit)
.edgesIgnoringSafeArea(.all)
Feedback ID: FB15081296
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: RealityKit, Reality Composer Pro, Shader Graph Editor
Hi everyone,
I am wondering which settings the camera(s) were set to at the time they were calibrated.
For instance, one aspect that is easy to find is the reference resolution of the images used when calibrating the intrinsics; it can be retrieved from intrinsicMatrixReferenceDimensions, which ties the principal point to the resolution used while calibration was ongoing.
However, I recently saw that there are focusing modes that potentially displace the lens's physical position. Settings like:
AutoFocusRangeRestriction: none, near, far
setFocusModeLocked: locks the lens position at the specified value and sets the focus mode to a locked state.
My concern is the impact these focus-driven lens displacements can have on the intrinsic matrix parameters, since after the lens moves those parameters may no longer describe the camera.
In simple words: what focus mode/range were the cameras set to when they were calibrated for intrinsics?
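As a side note on the reference dimensions mentioned above, here is a minimal sketch of how the intrinsics are usually rescaled from intrinsicMatrixReferenceDimensions to the resolution of the frame actually delivered (assumption: the AVCameraCalibrationData comes from something like an AVCapturePhoto's depth data):

import AVFoundation
import simd

func scaledIntrinsics(from calibration: AVCameraCalibrationData,
                      deliveredSize: CGSize) -> simd_float3x3 {
    var k = calibration.intrinsicMatrix
    let ref = calibration.intrinsicMatrixReferenceDimensions

    let sx = Float(deliveredSize.width / ref.width)
    let sy = Float(deliveredSize.height / ref.height)

    // Scale the focal lengths and the principal point; the matrix stores
    // fx, fy on the diagonal and (cx, cy) in the last column.
    k[0][0] *= sx   // fx
    k[1][1] *= sy   // fy
    k[2][0] *= sx   // cx
    k[2][1] *= sy   // cy
    return k
}

This rescaling assumes the intrinsics still describe the lens; a focus-driven lens displacement is exactly what would quietly invalidate it.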
The ShaderGraph Node Blurred Background (RealityKit) – https://developer.apple.com/documentation/shadergraph/realitykit/blurred-background-(realitykit) works fine within the RealityComposer Pro 2 editor but isn't working on iOS 18 or macOS 15. Instead of the blurred content it just renders as opaque in a single color (Screenshot 2).
Interestingly, it also fails to render within Reality Composer Pro when no other entities are in the scene, e.g. when only a background skybox is set.
Expected Behavior: It would be great if this node worked the same way as it does on visionOS since this would allow for really interesting and nice effects for scenes.
Feedback ID: FB15081190
Platform: iOS 18
Tech: RealityView
Hi! I was wondering if RealityView now provides a way for its session to persist anchor data in a world, so that anchor locations saved in one session can be loaded in another session and restore exactly the same positions.
I know that ARWorldMap in ARKit does that, but I was not able to find a way to use it with RealityView. I think that's because RealityView has ARKit under the hood but does not expose the ARKit session to client code.
So I was wondering if there's a SwiftUI + RealityView approach that can achieve a similar goal: come back to the same location and see the object in exactly the same place.
Thanks!
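For reference, a minimal sketch of the ARWorldMap route mentioned above using ARView, which exposes its ARSession directly (the file URL and function names are placeholders); whether RealityView on iOS 18 offers an equivalent is exactly the open question here:

import ARKit
import RealityKit

func saveWorldMap(from arView: ARView, to worldMapURL: URL) {
    arView.session.getCurrentWorldMap { worldMap, _ in
        guard let worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                           requiringSecureCoding: true) else { return }
        try? data.write(to: worldMapURL)
    }
}

func restoreWorldMap(into arView: ARView, from worldMapURL: URL) {
    guard let data = try? Data(contentsOf: worldMapURL),
          let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    else { return }

    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap   // relocalizes the saved anchors
    arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}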