I'm building a weather app in which users can search for a location to get its weather. The problem is that the search results only show locations in my own country, not worldwide. For example, I'm in China and I can't search for New York; it just returns nothing. Here's my code:
import SwiftUI       // for withAnimation
import MapKit
import Observation   // for @Observable

@Observable
class SearchPlaceManager: NSObject {
    var searchText: String = ""
    let searchCompleter = MKLocalSearchCompleter()
    var searchResults: [MKLocalSearchCompletion] = []

    override init() {
        super.init()
        searchCompleter.resultTypes = .address
        searchCompleter.delegate = self
    }

    @MainActor
    func searchLocation() {
        if !searchText.isEmpty {
            searchCompleter.queryFragment = searchText
        }
    }
}

extension SearchPlaceManager: MKLocalSearchCompleterDelegate {
    func completerDidUpdateResults(_ completer: MKLocalSearchCompleter) {
        withAnimation {
            self.searchResults = completer.results
        }
    }

    func completer(_ completer: MKLocalSearchCompleter, didFailWithError error: Error) {
        // Surfacing errors to see whether the completer rejects the query.
        print("Completer failed: \(error)")
    }
}
Also, I've tried setting searchCompleter.region = MKCoordinateRegion(center: CLLocationCoordinate2D(latitude: 0, longitude: 0), span: MKCoordinateSpan(latitudeDelta: 180, longitudeDelta: 360)), but it doesn't help.
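One thing I still want to try, to narrow down whether the limitation is in the completer or in my setup, is running the same text through a plain MKLocalSearch request (a rough sketch; queryText stands in for my search field):

import MapKit

// Sketch: run a one-off search to see whether a full MKLocalSearch
// returns worldwide results for the same text the completer ignores.
func searchEverywhere(for queryText: String) async throws -> [MKMapItem] {
    let request = MKLocalSearch.Request()
    request.naturalLanguageQuery = queryText
    request.resultTypes = [.address, .pointOfInterest]
    // Deliberately not setting request.region, so the query is not
    // biased toward the current locale.
    let response = try await MKLocalSearch(request: request).start()
    return response.mapItems
}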
I’m developing an app using RealityKit and RealityView. On newer iPhones, such as the iPhone 15 Pro, Object Occlusion appears to be enabled by default, which causes 3D entities to be hidden behind real-world objects in the scene. However, I need to disable this behavior to ensure proper rendering of my 3D content.
This issue does not occur on older devices like the iPhone 13, where the app works as intended. I haven’t been able to find a solution to explicitly disable object occlusion on the newer devices for RealityView.
Any guidance or suggestions to resolve this issue would be greatly appreciated! Thanks!
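For comparison, with an ARView-based setup occlusion can be switched off through the scene-understanding options; I haven't found an equivalent switch exposed for RealityView, so the snippet below is only the ARView side (a sketch, not a RealityView solution):

import RealityKit
import ARKit

// Sketch: on ARView (iOS), object/scene occlusion is controlled via
// environment.sceneUnderstanding. RealityView does not appear to expose
// the same options, which is exactly the problem described above.
func disableOcclusion(on arView: ARView) {
    arView.environment.sceneUnderstanding.options.remove(.occlusion)
}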
In RealityView, physics components only apply to rigid solids. How can I simulate the physical behavior of water and cloth?
entity.components.set(PhysicsBodyComponent())
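For reference, the rigid-body setup that does work looks roughly like the sketch below; as far as I can tell there is no built-in soft-body, cloth, or fluid simulation, so anything beyond rigid bodies would have to be faked or computed manually (the function name and box size are just an example):

import RealityKit

// Sketch: standard rigid-body physics on an entity. This covers solids only;
// water and cloth are not covered by PhysicsBodyComponent.
func addRigidBodyPhysics(to entity: Entity) {
    let shape = ShapeResource.generateBox(size: [0.1, 0.1, 0.1])
    entity.components.set(CollisionComponent(shapes: [shape]))
    entity.components.set(PhysicsBodyComponent(massProperties: .default,
                                               material: .default,
                                               mode: .dynamic))
}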
If I set UIApplicationPreferredDefaultSceneSessionRole to UISceneSessionRoleImmersiveSpaceApplication, my immersive space for the image works fine. But when I use UIWindowSceneSessionRoleApplication instead and try to open the immersive space from a particular sub-screen, the image is not shown and the immersive space does not open.
Does anyone have an idea what the issue is?
<key>UIApplicationSceneManifest</key>
<dict>
    <key>UIApplicationPreferredDefaultSceneSessionRole</key>
    <string>UIWindowSceneSessionRoleApplication</string>
    <key>UIApplicationSupportsMultipleScenes</key>
    <true/>
    <key>UISceneConfigurations</key>
    <dict>
        <key>UISceneSessionRoleImmersiveSpaceApplication</key>
        <array>
            <dict>
                <key>UISceneInitialImmersionStyle</key>
                <string>UIImmersionStyleFull</string>
            </dict>
        </array>
    </dict>
</dict>
My Info.plist values are shown above.
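With UIWindowSceneSessionRoleApplication as the default role, the immersive space has to be declared as a scene and opened explicitly from the window content; roughly something like the sketch below (the scene ids and view names are made up):

import SwiftUI

@main
struct MyVisionApp: App {
    var body: some Scene {
        // The normal window flow starts first because the default
        // session role is UIWindowSceneSessionRoleApplication.
        WindowGroup(id: "Main") {
            MainView()
        }

        // The immersive space must still be declared so it can be opened later.
        ImmersiveSpace(id: "ImageSpace") {
            ImmersiveImageView()
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}

struct MainView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Show 360° image") {
            Task {
                // Open the space explicitly from the sub-screen.
                let result = await openImmersiveSpace(id: "ImageSpace")
                if result != .opened { print("Immersive space failed to open: \(result)") }
            }
        }
    }
}

struct ImmersiveImageView: View {
    var body: some View { Text("360° image goes here") }
}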
This issue has existed since visionOS 1, unless this is how it is supposed to work. As you can see in the screen capture, the shadows from the top box are shown on all three boxes below.
This is a screen capture from Reality Composer Pro, but the same thing happens on the Vision Pro.
Is there any way to stop this behavior and have shadows only on the first object below the object that is casting them?
When I try to open an immersive space, I get an error like the one below:
HALC_ProxyIOContext::IOWorkLoop: skipping cycle due to overload
Any idea how to solve it?
I have a visionOS app using immersive space with RealityView. The app adds RealityKit entities to the app's Scene instance, and uses raycast to find CollisionCastHits.
I want now to write a unit test to check if the app finds the right hits.
To do so, I have to access the Scene instance to add entities, and to check if they are hit by scene.raycast.
But how can I access the scene instance?
I can access it e.g. after creating the RealityView via its content parameter, or via @Environment(\.realityKitScene). But this seems not to be possible in a unit test.
I tried the following test function:
@MainActor @Test func test() async throws {
    var scene: RealityKit.Scene?
    await withCheckedContinuation { continuation in
        _ = RealityView(make: { content in
            print("make")
            let entity = Entity()
            content.add(entity)
            scene = entity.scene
            continuation.resume()
        })
    }
    #expect(scene != nil)
}
But this logs
◇ Test test() started.
SWIFT TASK CONTINUATION MISUSE: test() leaked its continuation!
The reason is apparently that the make closure of RealityView is only called when SwiftUI calls it within the body of a SwiftUI View.
So, is it possible at all to access the app's scene in a unit test?
How can I guide users to set their preferred language in visionOS? At this point, the behavior seems to be different from that of iOS.
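On iOS I would deep-link the user to the app's page in Settings, where the per-app preferred language appears; whether visionOS shows the same language section there is exactly what I'm unsure about (a sketch of the iOS approach):

import UIKit

// Sketch: opens this app's page in Settings, where the per-app
// preferred language can be changed on iOS.
func openAppSettings() {
    guard let url = URL(string: UIApplication.openSettingsURLString) else { return }
    UIApplication.shared.open(url)
}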
I have a normal application flow, and at one point I have to open an immersive space so the user can view a 360° image. Currently, whenever I run the application it starts directly in the immersive space, and the normal flow of my app never starts. What is the issue here?
Hi,
When closing a WindowGroup, I want to show a prompt, and only dismiss the window when the user confirms.
How can I do this?
WindowGroup(id: "A") {
ContentView()
}
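I don't know of a way to intercept the system close control, so the closest I've come up with is an in-app close button that asks first and only then calls dismissWindow (a sketch; the dialog text and window id are mine):

import SwiftUI

struct ContentView: View {
    @Environment(\.dismissWindow) private var dismissWindow
    @State private var showingClosePrompt = false

    var body: some View {
        Button("Close window") {
            showingClosePrompt = true
        }
        .confirmationDialog("Close this window?", isPresented: $showingClosePrompt) {
            Button("Close", role: .destructive) {
                // Only dismiss after explicit confirmation.
                dismissWindow(id: "A")
            }
            Button("Cancel", role: .cancel) { }
        }
    }
}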
I'm developing an app in which I need to render some pictures and display a few models in a RealityView. I want to set up a camera, capture the virtual content through that camera, and save it as an image.
Hi,
We have been experimenting with visionOS and we need to query the field-of-view values for each eye of the device. We are currently using drawable.views[0].tangents and drawable.views[1].tangents respectively, which are labeled as 'Deprecated'. We wonder whether there is an alternative function for obtaining the FOVs, since we had no luck calculating them from the projection matrix we obtain from the drawables.
Thanks
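In case it helps, this is how we have been trying to recover the per-eye half-angle tangents from an off-axis projection matrix; it assumes the conventional column-major layout where columns.0.x and columns.1.y hold the focal terms and columns.2.x and columns.2.y the off-center terms, so sign conventions may need adjusting for the drawable's actual matrix:

import Foundation
import simd

// Sketch: recover per-side view tangents (and half-angles) from an off-axis
// perspective projection matrix, assuming columns.0.x = 2 / (tanRight + tanLeft)
// and columns.2.x = (tanRight - tanLeft) / (tanRight + tanLeft), and the
// analogous terms for the vertical axis.
struct FieldOfView {
    var left, right, top, bottom: Float   // half-angles in radians
}

func fieldOfView(fromProjection p: simd_float4x4) -> FieldOfView {
    let tanRight  = (1 + p.columns.2.x) / p.columns.0.x
    let tanLeft   = (1 - p.columns.2.x) / p.columns.0.x
    let tanTop    = (1 + p.columns.2.y) / p.columns.1.y
    let tanBottom = (1 - p.columns.2.y) / p.columns.1.y
    return FieldOfView(left: atan(tanLeft), right: atan(tanRight),
                       top: atan(tanTop), bottom: atan(tanBottom))
}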
I use an AVPlayer in a window view. I found that when I move the window to different positions, the default behavior is that the sound changes according to the window position. However, in some cases I don't want this default behavior; I'd like the sound not to change.
I have been playing around with the idea of drawing directly onto the pixels of the Vision Pro, as I am working on a telepresence app that streams a live stereoscopic feed from an articulated robot neck to the wearer.
I was playing around in the Compositor Services demo and modified it to show the following.
I created a grid pattern using normalized device coordinates (-1 to 1) and it looks great when it shows up in the simulator as shown below.
I wanted to see the effects of lens distortion on the image, so I ran this modified app on the actual Vision Pro; it seems that each eye has only a portion of this screen visible. I have included a screen capture of a screen recording made inside the Vision Pro while running the modified app.
The lines appear straight, which says to me that there must be some automatic pre-distortion correction applied (similar to the image shown below taken from an AVP teardown that I cannot link here).
However, I am wondering why the grid appears cropped, and what defines the bounds of the frame.
Hi, could anyone share some insights on how to get and track the 3D coordinates of real objects in the environment in Playgrounds? I searched for some resources and noticed that ARKit and RealityKit may be helpful, but I'm not sure how to do it.
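As a starting point (a sketch assuming an iOS ARView is available in the playground, not Vision Pro), plane detection plus a raycast gives a world-space coordinate for a point on a real surface:

import ARKit
import RealityKit
import UIKit

// Sketch: run world tracking with plane detection, then raycast from the
// screen center to get the 3D world coordinate of a real surface.
final class RealObjectLocator {
    let arView = ARView(frame: .zero)

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal, .vertical]
        arView.session.run(config)
    }

    func worldPositionAtScreenCenter() -> SIMD3<Float>? {
        let center = CGPoint(x: arView.bounds.midX, y: arView.bounds.midY)
        guard let result = arView.raycast(from: center,
                                          allowing: .estimatedPlane,
                                          alignment: .any).first else { return nil }
        // The translation column of the raycast transform is the hit position.
        let t = result.worldTransform.columns.3
        return SIMD3<Float>(t.x, t.y, t.z)
    }
}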
I just downloaded and opened the sample code for Creating 3D models as movable windows. (Link: https://developer.apple.com/documentation/visionos/creating-a-volumetric-window-in-visionos ).
I opened the main view in the simulator (canvas), placed the volumetric window somewhere, and then moved the camera a bit.
Expected:
I would expect volumetric windows to stay in place when I place them somewhere and then move around or look in a different direction.
Actually:
They don't stay in place. They slightly move with the camera.
Question:
Is this actual behavior expected?
Is this just a thing with the simulator and will not happen with real hardware?
I have a class with an Entity to which I added a Spatial Audio component. Furthermore, I have a function which uses the playAudio() method to start the spatial audio. During the first call of the function, everything is fine. If I call it again, the audio volume drops abruptly after half a second and becomes very quiet.
Approximately, I have the following code:
import AVFoundation
import RealityKit

class VoiceOutputPlayer: NSObject, ObservableObject, AVAudioPlayerDelegate {
    private var speechEntity = Entity()

    func play() {
        Task {
            let audioResource = try await AudioFileResource(contentsOf: urlWave)
            self.speechEntity.playAudio(audioResource)
        }
    }

    func initSpatialAudio() -> Entity {
        speechEntity.transform.translation.y = -0.37
        speechEntity.transform.translation.z = 0.09
        speechEntity.spatialAudio = SpatialAudioComponent(gain: Double.zero)
        speechEntity.spatialAudio?.reverbLevel = -2
        speechEntity.spatialAudio?.directivity = .beam(focus: 0.9)
        speechEntity.orientation = .init(angle: .pi, axis: [0, 1, 0])
        speechEntity.spatialAudio?.distanceAttenuation = .rolloff(factor: 1)
        return speechEntity
    }
}
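One experiment I still want to try (I don't know whether it fixes the volume drop): playAudio() returns an AudioPlaybackController, so the previous playback could be stopped explicitly before starting a new one, roughly like this:

import RealityKit

// Sketch: keep the controller returned by playAudio() and stop the previous
// playback before starting the next one, instead of stacking playbacks.
final class AudioReplayHelper {
    private var playbackController: AudioPlaybackController?

    func play(_ resource: AudioFileResource, on entity: Entity) {
        playbackController?.stop()
        playbackController = entity.playAudio(resource)
    }
}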
I have visionOS 2.2 on the Apple Vision Pro and use Xcode 16.1.
Copy of Feedback:
FB15969432
Improve window management with immersive spaces.
It is hard to manage windows from code when entering immersive space.
Look for instance at the sample:
https://developer.apple.com/documentation/visionos/displaying-a-3d-environment-through-a-portal
The window displayed before entering the virtual space stays there once the virtual space is entered: this window is too big, but it can't be resized by the program.
One could say this big window could be closed and a smaller window with the "Exit" button opened by the program, but then this small window would have to be closed and the main window reopened when leaving the immersive space.
In the immersive space, closing the "Exit" window with the X does not allow the user to leave the immersive space.
If the Digital Crown is then used, we go back to the Vision Pro main menu. If the app is chosen again, we can see that it wasn't closed: the "Exit" window is now displayed, but we are not in the immersive space!
Don't say "this is just a sample app", because all developers face those issues.
Please try to find the right solutions with your team to enhance this sample and share the right way to solve those issues. You could find that the specifications need to be enhanced.
You can also see that there is no way for a program to exit itself, even though this could be useful for some apps (when ending a SharePlay game, for instance).
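For context, the kind of window juggling currently needed around an immersive space looks roughly like the sketch below (the ids are invented, and it does not solve the "Exit" window problems described above):

import SwiftUI

struct EnterPortalButtons: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Enter portal") {
            Task {
                // Enter the space, then swap the big main window for a small exit window.
                _ = await openImmersiveSpace(id: "Portal")
                openWindow(id: "ExitControls")
                dismissWindow(id: "Main")
            }
        }
        Button("Exit portal") {
            Task {
                // Leaving: the swap has to be undone by hand.
                await dismissImmersiveSpace()
                openWindow(id: "Main")
                dismissWindow(id: "ExitControls")
            }
        }
    }
}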
Thank you very much for your time and consideration.
I found some snapshot APIs in the developer documentation, like the ones below:
RealityKit / Views and attachments / ARView / /snapshot(saveToHDR:completion:)
SceneKit / SCNView / snapshot()
Is there a similar API in visionOS? And if not, how can I implement a snapshot for a RealityView and a USDZ model?
Hello, I am creating a visionOS application where a simple web browser is implemented in an immersive view using WKWebView. Presentations cause a crash with "Presentations are not permitted within volumetric window scenes."