Hello,
I have an iOS app that uses SwiftUI, but the gesture code is written with UIGestureRecognizer. When I run this app on visionOS using the "Designed for iPad" destination and try to use any of my gestures, I see this warning in the console:
Trying to convert coordinates between views that are in different UIWindows, which isn't supported. Use convertPoint:fromCoordinateSpace: instead.
But I don't see any visible problems with the gestures.
I see this warning printed after the gesture takes place but before any of our gesture methods are invoked. So now I am wondering whether this is something we need to deal with, or internal work that needs to happen in UIKit.
Does anyone have any thoughts on this?
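For reference, here is a minimal sketch of the conversion the warning recommends; the helper and targetView are hypothetical names for illustration, not part of our app:

import UIKit

// A sketch, assuming the gesture code needs a point in another view: the
// UICoordinateSpace-based convert(_:from:) is the Swift spelling of the
// convertPoint:fromCoordinateSpace: method the warning suggests.
func location(of gesture: UIGestureRecognizer, in targetView: UIView) -> CGPoint {
    guard let sourceView = gesture.view else { return .zero }
    let point = gesture.location(in: sourceView)
    return targetView.coordinateSpace.convert(point, from: sourceView.coordinateSpace)
}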
I'm developing an app that needs to render pictures and display some models in a RealityView. I want to set up a camera, capture the virtual content through that camera, and save it as an image.
I have a normal application flow, and at one point I need to open an immersive space to view an image in 360°. Currently, however, whenever I run the application it starts in the immersive space, and my app's normal flow never starts. What is the issue here?
I'm getting the following error in my Swift build targeting visionOS 2.0:
" 'defaultDisplay' is unavailable in visionOS "
TL;DR: how do I specify an initial window position in visionOS? The docs seem to be off; see below.
The docs say it is available, but it is not, or at least my Xcode (Version 16.0) throws errors on it:
https://developer.apple.com/documentation/swiftui/scene/defaultwindowplacement(_:)
I know Apple is opinionated about window placement in visionOS, and maybe it will never be available, but the docs say it is in visionOS 2.0+, and it sure would be nice to be able to specify a default position toward the bottom of one's FOV, etc.
Side note: the example code in that doc also has the issue that "Window" is not available in visionOS (WindowGroup is).
Example code, barely modified from the example code in the doc:
var body: some Scene {
    WindowGroup("MyLilWindow", id: "MyLilWindow") {
        TestView()
    }
    .windowResizability(.contentSize)
    .defaultWindowPlacement { content, context in
        let displayBounds = context.defaultDisplay.visibleRect
        let size = content.sizeThatFits(.unspecified)
        let position = CGPoint(
            x: displayBounds.midX - (size.width / 2),
            y: displayBounds.maxY - size.height - 140)
        return WindowPlacement(position)
    }
}
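For comparison, a minimal sketch that avoids defaultDisplay, assuming visionOS 2.0+: since visionOS has no bounded display to take coordinates from, the placement appears to be expressed with a semantic position such as .utilityPanel (which sits low in the user's field of view) rather than display-relative points.

import SwiftUI

var body: some Scene {
    WindowGroup("MyLilWindow", id: "MyLilWindow") {
        TestView()
    }
    .windowResizability(.contentSize)
    .defaultWindowPlacement { content, context in
        // .utilityPanel places the window low in the field of view;
        // defaultDisplay is unavailable because visionOS has no display bounds.
        WindowPlacement(.utilityPanel)
    }
}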
We have a native iOS app that supports the upload and display of USDZ files. It has been working great since beta (late 2022) and live launch (late 2023) until now. But recently we had reports from some users, on Max-model phones (14 and 15 Pro Max) at least. When they tap to launch a 3D file, the Quick Look player is triggered. So far so good. But for affected users the controls along the top of the player (X to close, the AR | Object toggle, and the share button) move too high up the phone screen and get stuck, untappable, behind the phone's top status bar (time, camera bug, connection, battery).
This means that when they open a USDZ file in AR or 3D view they have to hard-close the app to get out of it again. This doesn't happen when they open a USDZ file from Files, Dropbox, etc. on their phone (which also uses the Quick Look player). The controls only move up and get stuck when launching a USDZ from within our app.
I'm at a loss to figure out what might be causing this on some phones and not others, and why only when opening a USDZ file from our app! So far we have replicated this issue on a single iPhone 14 Pro Max and a 15 Pro Max, both running iOS 18+.
We have tested on other 15 Pro Maxes on the same OS, as well as Pros, regular iPhones, and Minis, and are not experiencing the issue. You would think that a USDZ file is a USDZ file, and that your iPhone knows what to do with it and opens it in the Quick Look player regardless of where you open the file from. Why would the navigation items move if you open the USDZ file from within our app, and why only for some select users?
We will continue to troubleshoot and test, but I wanted to throw this out to the community in case anyone has experienced this or has any theories that would expedite our testing. Your thoughts are most appreciated!
Here is a video showing the expected (correct) behaviour: https://www.dropbox.com/scl/fi/0sp8s4opaf2m4gukkcbrk/How-opening-a-USDZ-should-behave_correct-behaviour.MP4?rlkey=tzzau9x91mwox66gsgguryhep&st=qiykmne9&dl=0 and a screenshot attached below of what is happening on one of the affected users' iPhone 15 Pro Max.
How can I guide users to set their preferred language in visionOS? At this point, the behavior seems to be different from that of iOS.
Information is light on the new subdivision support for USD models in RealityKit, and I have been unable so far to get one of my models to actually subdivide within Reality Composer Pro or Quick Look (or when viewing on Vision Pro).
I've exported a few test models from Houdini and verified that it contains ' uniform token subdivisionScheme = "catmullClark"'. I've started with some very lightweight, basic meshes.
But, when viewing, they simply look like polygonal meshes. There's no 'subdividing' occurring at runtime when viewing the models.
Is there a trick to getting them to actually smooth-out?
This issue has existed since visionOS 1, unless this is how it is supposed to work. As you can see in the screen capture, the shadows from the top box are shown on all three boxes below.
This is a screen capture in Reality Composer Pro, but the same thing happens on the Vision Pro.
Is there any way to stop this behavior and just have shadows on the first object below the object that is casting them?
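If these are grounding shadows, one possible workaround sketch, assuming visionOS 2.0+ (where GroundingShadowComponent gained a receivesShadow flag), is to opt the lower boxes out of receiving shadows entirely. This is a guess at a mitigation, not a confirmed fix:

import RealityKit

// Stops grounding shadows from other objects being drawn on this entity,
// while still letting it cast its own shadow.
func disableShadowReceiving(on entity: Entity) {
    entity.components.set(
        GroundingShadowComponent(castsShadow: true, receivesShadow: false)
    )
}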
I'm having trouble pairing my Apple Vision Pro to my MacBook Pro M3. My MacBook Pro is on Sonoma 14.6, and I have tested pairing a visionOS 1.2 and a 2.0 Vision Pro, but it still doesn't work. I have a Mac mini that pairs and connects fine to the headsets. These are the steps I have tried so far on the Vision Pro and MacBook Pro to pair them, with no success:
- On the same Windows Wi-Fi hotspot
- On the same iPhone hotspot
- On another Wi-Fi hotspot
- Tried to clear remote devices; still not recognized
- Tried to turn developer mode off and on; still nothing
- Tried to reset network settings
- Tried to restart the headset
- Tried to restart Xcode
- Tried to restart the Mac
- Just after a restart the headset showed up; I clicked pair and typed in the code, but the headset stayed "disconnected" and couldn't connect to the Mac
- Tried to restart both the Mac and the headset
- Tried to rename the headset
- Tried to switch Macs
- Tried one headset on at a time
- Tried to clean the build folder
- Deleted the contents of ~/Library/Developer/Xcode/DerivedData
- Tried sudo defaults write /Library/Preferences/com.apple.mDNSResponder.plist NoMulticastAdvertisements -bool true
- Tried to deactivate the firewall
In RealityView, physics components only apply to certain solids. How can I simulate the physical behavior of water and cloth?
entity.components.set(PhysicsBodyComponent())
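For context, a minimal sketch of the rigid-body setup RealityKit supports out of the box; as far as I know there is no built-in fluid or cloth solver, so water and cloth would need custom meshes, shaders, or an external simulation:

import RealityKit

// A rigid-body sphere: PhysicsBodyComponent covers solid bodies only.
let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
sphere.components.set(PhysicsBodyComponent(
    massProperties: .default,
    material: .default,
    mode: .dynamic
))
// A collision shape is required for the body to interact with the scene.
sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))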
I have been concentrating on developing a visionOS application. I am currently quite familiar with RealityKit, but CompositorServices has also captured my attention, and I have not yet learned it. Could you please clarify whether it is essential for me to learn CompositorServices? Additionally, I would appreciate insights into the respective advantages of RealityKit and CompositorServices.
I'm building a weather app where users can search locations to get their weather, but the results only show locations in my country, not globally. For example, I'm in China and I can't search for New York; it just shows nothing. Here's my code:
import MapKit
import SwiftUI

@Observable
class SearchPlaceManager: NSObject {
    var searchText: String = ""
    let searchCompleter = MKLocalSearchCompleter()
    var searchResults: [MKLocalSearchCompletion] = []

    override init() {
        super.init()
        searchCompleter.resultTypes = .address
        searchCompleter.delegate = self
    }

    @MainActor
    func searchLocation() {
        if !searchText.isEmpty {
            searchCompleter.queryFragment = searchText
        }
    }
}

extension SearchPlaceManager: MKLocalSearchCompleterDelegate {
    func completerDidUpdateResults(_ completer: MKLocalSearchCompleter) {
        withAnimation {
            self.searchResults = completer.results
        }
    }
}
Also, I've tried setting searchCompleter.region = MKCoordinateRegion(center: CLLocationCoordinate2D(latitude: 0, longitude: 0), span: MKCoordinateSpan(latitudeDelta: 180, longitudeDelta: 360)), but it doesn't work.
In RealityKit's particle effects there is a type:
ParticleEmitterComponent.Presets
It can invoke certain predefined particle effects, but I am interested in learning how to modify these particle entities (such as adjusting the color, the number of particles, or the generation range).
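As a starting point, a hedged sketch: the presets return ordinary ParticleEmitterComponent values, so individual properties can be overridden before attaching the component. The entity and the concrete values below are illustrative assumptions:

import RealityKit

var particles = ParticleEmitterComponent.Presets.fireworks
particles.mainEmitter.birthRate = 500                  // number of particles
particles.mainEmitter.color = .constant(.single(.red)) // particle color
particles.emitterShapeSize = [0.5, 0.5, 0.5]           // generation range
entity.components.set(particles)                       // `entity` assumed to exist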
Here are the code snippets.
struct RealityViewTestView: View {
    @State private var texts: [String] = []

    var body: some View {
        RealityView { content, attachments in
        } update: { content, attachments in
            for text in texts {
                if let textEntity = attachments.entity(for: text) {
                    textEntity.position.x = Float.random(in: -0.1...0.1)
                    content.add(textEntity)
                }
            }
        } attachments: {
            ForEach(texts, id: \.self) { text in
                Attachment(id: text) {
                    Text(text)
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    texts.append(String(UUID().uuidString.prefix(6)))
                }
            }
            ToolbarItem {
                Button("Remove") {
                    texts.remove(at: Int.random(in: 0..<texts.count))
                }
            }
        }
    }
}
struct RealityViewTestView: View {
    @State private var texts: [String] = []
    @State private var entities: [Entity] = []

    var body: some View {
        RealityView { content, attachments in
        } update: { content, attachments in
            // for text in texts {
            //     if let textEntity = attachments.entity(for: text) {
            //         textEntity.position.x = Float.random(in: -0.1...0.1)
            //         content.add(textEntity)
            //     }
            // }
            for entity in entities {
                content.add(entity)
            }
        } attachments: {
            ForEach(texts, id: \.self) { text in
                Attachment(id: text) {
                    Text(text)
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    // texts.append(String(UUID().uuidString.prefix(6)))
                    let m = ModelEntity(
                        mesh: .generateSphere(radius: 0.1),
                        materials: [SimpleMaterial(color: .white, isMetallic: false)]
                    )
                    m.position.x = Float.random(in: -0.2...0.2)
                    entities.append(m)
                }
            }
            ToolbarItem {
                Button("Remove") {
                    // texts.remove(at: Int.random(in: 0..<texts.count))
                    entities.removeLast()
                }
            }
        }
    }
}
About the first code snippet: when I remove an element from texts, why does content automatically remove the corresponding entity? And in the second code snippet, why does content not automatically remove the corresponding entity? I am very curious.
I am wondering: is it possible to configure a 3D object to respond to a person's gaze, e.g. change the colors of some parts of the 3D model where the person is looking, i.e. where their gaze lands on the surface of the model?
For example, imagine a 3D model of a cool dragon 🐉 in a person's physical space, seen through the mixed reality view of a Vision Pro. It would be really cool to change the color, or make specific parts of the dragon's skin shimmer, but only in the areas where the person is looking. Is this possible? Is it doable with the eye tracking of the Vision Pro?
Any advice would be appreciated. 🤝 🤓 I am new to iOS and visionOS development.
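One note that may help frame this: visionOS does not expose raw gaze data to apps for privacy reasons, but the system can draw a highlight where the user looks via HoverEffectComponent. A minimal sketch follows (the collision shape is an illustrative placeholder); per-region color changes on the model's surface would go beyond what this provides:

import RealityKit

// The system renders a hover highlight where the user is looking, without
// the app ever receiving the gaze location.
func addGazeHighlight(to entity: Entity) {
    entity.components.set(InputTargetComponent())
    entity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.5)]))
    entity.components.set(HoverEffectComponent())
}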
Hi,
since RealityKit 4 now supports blend shapes, I was wondering if there are any workflow or tooling recommendations for baking/exporting them into a USDZ.
Are Blender or Cinema 4D capable of doing that out of the box? Or should we look into NVIDIA Omniverse (https://docs.omniverse.nvidia.com/connect/latest/blender/manual.htm)?
So far this topic seems very sparsely documented, and I would appreciate any hints. Thank you!
Hello,
I'm trying to view the components of an Entity I'm creating in RealityKit by reading from a USDZ file. I have the following code snippet in my app.
if let appleEntity = try? Entity.loadModel(named: "apple_tile") {
    let c = appleEntity.components
    for comp in c { // <- compiler error here
        print(comp)
    }
}
The compiler error I'm receiving says "For-in loop requires 'Entity.ComponentSet' to conform to 'Sequence'". However, I thought it did conform, according to the documentation for Entity.ComponentSet?
Curious if anyone else has had this problem. Running Xcode 15.4, and my Swift version is:
xcrun swift -version
swift-driver version: 1.90.11.1 Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4)
Target: x86_64-apple-macosx14.0
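In case it helps, a workaround sketch against the same API surface: since Entity.ComponentSet does not conform to Sequence, query components by type (or inspect the count) instead of iterating:

import RealityKit

if let appleEntity = try? Entity.loadModel(named: "apple_tile") {
    // ComponentSet exposes a count and typed subscripts, but no iteration.
    print("component count:", appleEntity.components.count)
    if let model = appleEntity.components[ModelComponent.self] {
        print(model)
    }
}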
Can an app made with the RoomPlan API be used on iPhones without LiDAR? If so, how much accuracy would be lost compared to iPhones with LiDAR?
If not, is there an API similar to RoomPlan that works on iPhones without LiDAR?
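For reference, RoomPlan exposes a runtime support check that, as I understand it, returns false on devices without LiDAR, so a fallback flow can be gated on it. A minimal sketch:

import RoomPlan

if RoomCaptureSession.isSupported {
    // Present the RoomCaptureView-based scanning flow.
} else {
    // Fall back to a flow that does not require LiDAR.
}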
Hi, I'm developing a virtual camera system using ReplayKit to capture scene video by directly accessing raw video buffers. The capture mechanism works flawlessly when repeatedly starting and stopping video capture within a continuous immersive session. However, a critical issue arises when the immersive space is interrupted:
Step 1: Enter the immersive environment and start and stop video capture (multiple times, with no issues)
Step 2: Press the crown button to exit the immersive environment
Step 3: Return to the immersive space
Step 4: Attempt to start video capture
At this point, the startCapture method throws an unexpected error, disrupting the video capture workflow. This is the Xcode error that I see:
[ERROR] -[RPScreenRecorder startCaptureWithHandler:completionHandler:]_block_invoke_2:500 failed to start due to error: Error Domain=com.apple.ReplayKit.RPRecordingErrorDomain Code=-5803 "Recording failed to start" UserInfo={NSLocalizedDescription=Recording failed to start}
I have tried all possible ways to call stopCapture, including in onDisappear and elsewhere, and nothing seems to solve this.
I want to open a view in my app that contains a Model3D view, and I want that object to rotate continuously around the Y axis while it is visible. Is it possible to animate the rotation of a Model3D view?
I've tried this code, but the object just sits there and doesn't rotate.
import RealityKit
import RealityKitContent
import SwiftUI

struct QuantumComputerArea: View {
    @State var degreesRotating = 0.0

    var body: some View {
        VStack {
            Model3D(named: "quantumComputer") { phase in
                switch phase {
                case .empty:
                    ProgressView()
                case let .failure(error):
                    Text(error.localizedDescription)
                case let .success(model):
                    model
                        .resizable()
                        .scaledToFit()
                        .offset(x: -75, y: 0)
                        .rotation3DEffect(.degrees(degreesRotating), axis: (x: 0, y: 1, z: 0))
                @unknown default:
                    fatalError()
                } //phase
            } //Model3D
            .onAppear {
                withAnimation(Animation.linear(duration: 10).repeatForever(autoreverses: false)) {
                    degreesRotating = 360
                }
            }
        } //VStack
    } //View
} //View
I'm probably missing something simple, but if anyone has any suggestions (including using a RealityView) I'd be grateful for the advice.
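For anyone exploring the RealityView route, here is a hedged sketch that spins the entity itself with a small custom system; SpinComponent and SpinSystem are names introduced for illustration, and the sketch assumes the same "quantumComputer" asset lives in the RealityKitContent bundle:

import RealityKit
import RealityKitContent
import SwiftUI

struct SpinComponent: Component {
    var radiansPerSecond: Float = 2 * .pi / 10 // one full turn every 10 seconds
}

struct SpinSystem: System {
    static let query = EntityQuery(where: .has(SpinComponent.self))
    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let spin = entity.components[SpinComponent.self] else { continue }
            // Apply a small per-frame rotation around the Y axis.
            let delta = Float(context.deltaTime) * spin.radiansPerSecond
            entity.orientation *= simd_quatf(angle: delta, axis: [0, 1, 0])
        }
    }
}

struct SpinningModelView: View {
    var body: some View {
        RealityView { content in
            SpinComponent.registerComponent()
            SpinSystem.registerSystem()
            if let model = try? await Entity(named: "quantumComputer", in: realityKitContentBundle) {
                model.components.set(SpinComponent())
                content.add(model)
            }
        }
    }
}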