Hello!
I noticed that after WWDC 24, support for MTKView was added on visionOS 1.0+. This is great! But when I use an MTKView on anything before visionOS 2.0, it doesn't work and the app ends up crashing.
Console error when running on a device that is on visionOS 1.2:
Symbol not found: _$s27_CompositorServices_SwiftUI0A5LayerV13configuration8rendererAcA0aE13Configuration_p_ySo019CP_OBJECT_cp_layer_G0CScMYcctcfC
Expected in: <EFD973D2-97E1-380B-B89A-13CC3820B7F7> /System/Library/Frameworks/_CompositorServices_SwiftUI.framework/_CompositorServices_SwiftUI
It looks like MTKView may be using Compositor Services under the hood?
Any help would be great.
Thank you!
Hello,
Does the Apple Vision Pro have an API for creating custom triggers for selecting things on the screen instead of the hand pinch gesture? For instance, using an external button/signal/controller instead of pinching fingers?
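Within an app, one hedged avenue is the GameController framework: a paired Bluetooth game controller's button press can drive a custom "select" action. This does not replace the system-wide gaze-and-pinch selection; the sketch below assumes a controller is already paired to the device.
import GameController

// Sketch: treat button A on a paired game controller as a selection trigger.
final class ExternalSelectTrigger {
    var onSelect: () -> Void

    init(onSelect: @escaping () -> Void) {
        self.onSelect = onSelect
        NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect,
            object: nil,
            queue: .main
        ) { [weak self] note in
            guard let controller = note.object as? GCController else { return }
            // Fire on the press edge, not on release.
            controller.extendedGamepad?.buttonA.pressedChangedHandler = { _, _, pressed in
                if pressed { self?.onSelect() }
            }
        }
    }
}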
In the "Discover RealityKit APIs for iOS, macOS, and visionOS" presentation, there was a slide at the end highlighting new features not covered in the video. One of them was surface subdivision, but I have not been able to find any documentation or APIs that support this feature. Does anyone have further details on how this works in RealityKit?
In Xcode 16 beta 1 and beta 3, when running a SwiftUI app that ran successfully on visionOS 1 in the visionOS 2 simulator, I received the following crash at startup:
Thread 1: "*** -[NSProxy doesNotRecognizeSelector:plane] called!"
I've gone through my code attempting to find any references to a plane method, but I have no such calls in my code, leading me to suspect that this is somehow related to the visionOS beta simulator code. Has anyone else run into this bug and worked around it somehow?
Creating 3D models with Object Capture on iPhone vs. on Mac
1. After comparing the model generated from photos taken on the phone with the highest-accuracy (.raw) model generated from the same data set on the Mac, the results differ: sometimes the phone model is more accurate, and sometimes the Mac model is. What is the difference between the two? According to WWDC23, the Mac should be able to generate a higher-precision model, yet in actual testing the completeness of the Mac output is sometimes worse than the phone's. Why is that?
2. Is it possible to set the accuracy of the model generated on the phone?
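Regarding question 2, a hedged sketch of requesting a specific detail level from PhotogrammetrySession (the paths are placeholders; my understanding is that .raw and .full are Mac-only, while supported iPhones offer the lower tiers):
import RealityKit

// Sketch: request a specific detail level for the reconstructed model.
func makeModel() async throws {
    let images = URL(fileURLWithPath: "/path/to/images", isDirectory: true)  // placeholder
    let output = URL(fileURLWithPath: "/path/to/model.usdz")                 // placeholder

    let session = try PhotogrammetrySession(input: images)
    // Detail tiers: .preview, .reduced, .medium, .full, .raw
    try session.process(requests: [.modelFile(url: output, detail: .medium)])

    for try await message in session.outputs {
        if case .processingComplete = message {
            print("Model written to \(output.path)")
        }
    }
}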
Dear all,
I am experiencing some problems with the drag gesture in visionOS. Typically, this gesture involves the user pinching an entity or, more commonly, a window, and moving/dragging it around. However, this is not always the case for entities (3D models) placed in the environment. It appears that the user can both pinch-and-drag and/or move the entity with their bare hands.
In the latter case, the onChanged cycle doesn't always end if the user keeps their hands near the object, causing it to keep moving even when that is not what the user intends; this also occurs when the user is no longer hovering over the entity. Larger entities close to the user (more so than those in the "TransformingRealityKitEntitiesUsingGestures" demo) seem to become attached to their hands, causing the gesture to continue indefinitely, and entities often move to unintended positions.
I believe that these two different behaviors within the same gesture container are intrinsically different: one involves pinching and dragging, while the other involves enabling hands physics, and it should be easy to distinguish between the two.
How can we correctly address this situation?
Thank you for your assistance
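For reference, the explicit pinch-and-drag variant has a well-defined lifecycle when the gesture is targeted at a single entity, following the documented targetedToEntity pattern; a minimal sketch:
import SwiftUI
import RealityKit

// Sketch: a pinch-drag restricted to one entity, with a deterministic
// onChanged/onEnded lifecycle (no bare-hand physics involved).
struct DraggableModelView: View {
    let model: Entity  // assumed to carry CollisionComponent + InputTargetComponent

    var body: some View {
        RealityView { content in
            content.add(model)
        }
        .gesture(
            DragGesture()
                .targetedToEntity(model)
                .onChanged { value in
                    // Convert the gesture location into the entity's parent space.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
                .onEnded { _ in
                    // The gesture releases the entity here.
                }
        )
    }
}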
I'm currently streaming synchronised video and depth data from my iPhone 13, using AVFoundation, with the video set to AVCaptureSession.Preset.vga640x480. When looking at the corresponding images (with depth values mapped to a grey colour map; both map and image are 640x480), it appears the two feeds have different fields of view: the depth feed is zoomed in and angled upwards, while the colour feed is more zoomed out. I've looked at the intrinsics from both the depth map and my colour sample buffer; they are identical.
Does anyone know why this might be?
My setup code is below (shortened):
import AVFoundation
import CoreVideo

class VideoCaptureManager {
    private enum SessionSetupResult {
        case success
        case notAuthorized
        case configurationFailed
    }

    private enum ConfigurationError: Error {
        case cannotAddInput
        case cannotAddOutput
        case defaultDeviceNotExist
    }

    private let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInTrueDepthCamera],
        mediaType: .video,
        position: .front
    )

    private let session = AVCaptureSession()
    public let videoOutput = AVCaptureVideoDataOutput()
    public let depthDataOutput = AVCaptureDepthDataOutput()
    private var outputSynchronizer: AVCaptureDataOutputSynchronizer?
    private var videoDeviceInput: AVCaptureDeviceInput!

    private let sessionQueue = DispatchQueue(label: "session.queue")
    private let videoOutputQueue = DispatchQueue(label: "video.output.queue")

    private var setupResult: SessionSetupResult = .success

    init() {
        sessionQueue.async {
            self.requestCameraAuthorizationIfNeeded()
        }
        sessionQueue.async {
            self.configureSession()
        }
        sessionQueue.async {
            self.startSessionIfPossible()
        }
    }
    private func requestCameraAuthorizationIfNeeded() {
        switch AVCaptureDevice.authorizationStatus(for: .video) {
        case .authorized:
            break
        case .notDetermined:
            sessionQueue.suspend()
            AVCaptureDevice.requestAccess(for: .video, completionHandler: { granted in
                if !granted {
                    self.setupResult = .notAuthorized
                }
                self.sessionQueue.resume()
            })
        default:
            setupResult = .notAuthorized
        }
    }
    private func configureSession() {
        if setupResult != .success {
            return
        }

        let defaultVideoDevice: AVCaptureDevice? = videoDeviceDiscoverySession.devices.first
        guard let videoDevice = defaultVideoDevice else {
            print("Could not find any video device")
            setupResult = .configurationFailed
            return
        }

        do {
            videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)
        } catch {
            setupResult = .configurationFailed
            return
        }

        session.beginConfiguration()
        session.sessionPreset = AVCaptureSession.Preset.vga640x480

        guard session.canAddInput(videoDeviceInput) else {
            print("Could not add video device input to the session")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        }
        session.addInput(videoDeviceInput)

        if session.canAddOutput(videoOutput) {
            session.addOutput(videoOutput)
            if let connection = videoOutput.connection(with: .video) {
                connection.isCameraIntrinsicMatrixDeliveryEnabled = true
            } else {
                print("Cannot setup camera intrinsics")
            }
            videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
        } else {
            print("Could not add video data output to the session")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        }

        if session.canAddOutput(depthDataOutput) {
            session.addOutput(depthDataOutput)
            depthDataOutput.isFilteringEnabled = false
            if let connection = depthDataOutput.connection(with: .depthData) {
                connection.isEnabled = true
            } else {
                print("No AVCaptureConnection")
            }
        } else {
            print("Could not add depth data output to the session")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        }

        let depthFormats = videoDevice.activeFormat.supportedDepthDataFormats
        let filtered = depthFormats.filter({
            CMFormatDescriptionGetMediaSubType($0.formatDescription) == kCVPixelFormatType_DepthFloat16
        })
        let selectedFormat = filtered.max(by: { first, second in
            CMVideoFormatDescriptionGetDimensions(first.formatDescription).width < CMVideoFormatDescriptionGetDimensions(second.formatDescription).width
        })

        do {
            try videoDevice.lockForConfiguration()
            videoDevice.activeDepthDataFormat = selectedFormat
            videoDevice.unlockForConfiguration()
        } catch {
            print("Could not lock device for configuration: \(error)")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        }

        session.commitConfiguration()
    }
    private func addVideoDeviceInputToSession() throws {
        do {
            var defaultVideoDevice: AVCaptureDevice?
            defaultVideoDevice = AVCaptureDevice.default(
                .builtInTrueDepthCamera,
                for: .depthData,
                position: .front
            )
            guard let videoDevice = defaultVideoDevice else {
                print("Default video device is unavailable.")
                setupResult = .configurationFailed
                session.commitConfiguration()
                throw ConfigurationError.defaultDeviceNotExist
            }
            let videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)
            if session.canAddInput(videoDeviceInput) {
                session.addInput(videoDeviceInput)
            } else {
                setupResult = .configurationFailed
                session.commitConfiguration()
                throw ConfigurationError.cannotAddInput
            }
        }
    }
}
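One thing worth checking (a debugging sketch, not a diagnosis): intrinsics can be identical as 3x3 matrices yet describe different fields of view if their reference dimensions differ. The depth side carries calibration data with those dimensions, while the video side delivers intrinsics as a sample-buffer attachment:
extension VideoCaptureManager: AVCaptureDataOutputSynchronizerDelegate {
    // (VideoCaptureManager would need to inherit from NSObject for this conformance.)
    func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                                didOutput collection: AVCaptureSynchronizedDataCollection) {
        // Depth side: calibration includes the dimensions the intrinsics refer to.
        if let synced = collection.synchronizedData(for: depthDataOutput)
                as? AVCaptureSynchronizedDepthData,
           let calibration = synced.depthData.cameraCalibrationData {
            print("depth intrinsics:", calibration.intrinsicMatrix,
                  "reference dims:", calibration.intrinsicMatrixReferenceDimensions)
        }
        // Video side: intrinsics arrive as a sample-buffer attachment.
        if let synced = collection.synchronizedData(for: videoOutput)
                as? AVCaptureSynchronizedSampleBufferData,
           let matrix = CMGetAttachment(synced.sampleBuffer,
                                        key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                        attachmentModeOut: nil) {
            print("video intrinsics attachment:", matrix)
        }
    }
}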
Hello,
I'm experimenting with the PortalComponent and clipping behaviors. My belief was that, with some arbitrary plane mesh, I could have the entire contents of a single world entity that has a PortalCrossingComponent clipped to the boundaries of the plane mesh.
Instead, what I seem to be experiencing is that the mesh in the target world of the portal will actually display outside the plane boundaries.
I've attached a video that shows the boundaries of my world escaping the portal clipping/transition plane. It also shows how, when I navigate below a certain threshold in the scene, I can see what appears to be the "clipped" world (the dimensions of the clipping plane are obvious there), but when I move above a certain level, the world contents "escape" the clipping behavior.
https://scale-assembly-dropbox.s3.amazonaws.com/clipping.mov
(I would have made the above a link, but it is not a permitted domain; you can follow that URL to see the behavior, though.)
It almost seems as if anything with a PortalCrossingComponent is allowed to appear in the PortalComponent's parent scene, rather than being clipped by the PortalComponent's boundary.
For reference, the code I'm using is almost identical to the sample code in this document:
https://developer.apple.com/documentation/realitykit/portalcomponent
with the caveat that I'm using a plane that has .positiveY clipping and portal crossing behaviors, and the clipping plane mesh is as seen in the video.
Do I misunderstand how PortalComponent is meant to be used? Or is there a bug in how it currently behaves?
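For comparison, here's a sketch of the setup I'd expect to clip correctly, following the documented pattern with the visionOS 2 clippingMode/crossingMode options (the .positiveY direction and the crossingEntity parameter stand in for your plane and crossing content):
import RealityKit

// Sketch: a portal plane that clips the target world and admits only
// explicitly designated crossing entities.
func makePortal(crossingEntity: Entity) -> (world: Entity, portal: Entity) {
    let world = Entity()
    world.components.set(WorldComponent())

    // Only the entity that should pierce the surface gets the crossing
    // component; everything else in the world should remain clipped.
    crossingEntity.components.set(PortalCrossingComponent())
    world.addChild(crossingEntity)

    let portal = Entity()
    portal.components.set(ModelComponent(
        mesh: .generatePlane(width: 1, depth: 1),
        materials: [PortalMaterial()]
    ))
    portal.components.set(PortalComponent(
        target: world,
        clippingMode: .plane(.positiveY),
        crossingMode: .plane(.positiveY)
    ))
    return (world, portal)
}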
I have an Enterprise Developer account and the managed entitlement (com.apple.developer.arkit.barcode-detection.allow).
I am using the code from WWDC24's spatial barcode & QR code scanning example.
When I run my project, my BarcodeDetectionProvider is created fine, but the for await anchorUpdate in barcodeDetection.anchorUpdates loop exits immediately. I have tried calling it several times, but it doesn't help.
For example, I call this startBarcodeScanning function from ContentView:
var barcodeDetection = BarcodeDetectionProvider(symbologies: [.code39, .qr, .upce])
var arkitSession = ARKitSession()

public func startBarcodeScanning() {
    Task {
        barcodeDetection = BarcodeDetectionProvider(symbologies: [.code39, .qr, .upce])
        await arkitSession.queryAuthorization(for: [.worldSensing])
        do {
            try await arkitSession.run([barcodeDetection])
            print("arkitSession.run([barcodeDetection])")
        } catch {
            return
        }
        for await anchorUpdate in barcodeDetection.anchorUpdates {
            switch anchorUpdate.event {
            case .added:
                print("addEntity(myAnchor: anchorUpdate.anchor)")
                addEntity(myAnchor: anchorUpdate.anchor)
            case .updated:
                print("UpdateEntity")
                updateEntity(myAnchor: anchorUpdate.anchor)
            case .removed:
                print("RemoveEntity")
                removeEntity()
            }
        }
        // await loadInfo()
    }
}
Hey everyone, this is my first post here in the Apple forum.
I need your help to better understand RealityKit and file exports, but let me explain.
I'm trying to create a little 3D object editor, and it seems to work pretty well using RealityViews and managing materials on the Entity.
I'm currently working with all the beta APIs, and I would like to export my entity to a .usdz or .obj file.
I've found a method that allows me to create a .reality file:
let path = FileManager.default.urls(for: .documentDirectory,
                                    in: .userDomainMask)[0]
    .appendingPathComponent("model.reality")
try await self.appState.parentEntity.write(to: path)
but now I don't know how to convert it to a .usdz or .obj file, or any other standard 3D format.
Does anyone have an idea how I could do this?
Thank you so much!
Have a nice day ^^
I was trying to load an Entity with Entity(named: sceneName, in: realityKitContentBundle), which works for many of my .usda files. But this time I got an error:
Error loading asset from scene PinballTable.usda: failed to load '7058602595919186152 Scene (RealityFileAsset)Bundle/RealityKitContent-RealityKitContent-resources/RealityKitContent.reality/Scene_14.compiledscene' (Asset provider load failed: type 'RealityFileAsset' -- Failed to load compiled data for asset path 'Scene_14.compiledscene', due to error: Failed to deserialize asset data.)
Any ideas on why this won't work? I checked the size of my .usda file; it's around 42 KB, so I don't think file size is the cause. Because there are many .usda references inside this scene, I suspect the bundle cannot locate the other referenced .usda files. So I exported the whole scene to a .usdz file, which comes to 118 KB. I wonder whether that could be the issue affecting the loading result, but this is what I have tried so far.
visionOS: beta 2 / simulator 1.1 (neither works)
Xcode: 15.4 / 16.0 beta (neither works)
Hello 👋
I have the following question:
I am using a simulator with visionOS 2.0 installed, and I am trying to display a remote spatial image, but I cannot get it to display.
I tried the new WebKit features (https://webkit.org/blog/15443/news-from-wwdc24-webkit-in-safari-18-beta/#spatial-media) to show the image in a web view, but I cannot make it work; the image is not shown.
For a native version, I thought about the new Quick Look features for displaying spatial media, but I think those only work with local files. Right?
I tried downloading the file first, but had no success.
Any ideas how I can display remote spatial images?
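For the native route, a hedged sketch of the "download first, then preview locally" approach (the remote URL is a placeholder, and I'm assuming the SwiftUI quickLookPreview modifier handles the spatial HEIC once the file is local):
import SwiftUI
import QuickLook

// Sketch: fetch the remote spatial photo, then hand the local URL to Quick Look.
struct RemoteSpatialImageView: View {
    @State private var localURL: URL?
    private let remoteURL = URL(string: "https://example.com/spatial-photo.heic")!  // placeholder

    var body: some View {
        Button("Show spatial photo") {
            Task {
                let (tmp, _) = try await URLSession.shared.download(from: remoteURL)
                // Give the file a sensible extension so Quick Look can sniff the type.
                let dest = FileManager.default.temporaryDirectory
                    .appendingPathComponent("spatial.heic")
                try? FileManager.default.removeItem(at: dest)
                try FileManager.default.moveItem(at: tmp, to: dest)
                localURL = dest
            }
        }
        .quickLookPreview($localURL)
    }
}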
https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/converting_side-by-side_3d_video_to_multiview_hevc_and_spatial_video
When using this sample project to convert side-by-side video to spatial MV-HEVC video, the output is not recognized as spatial video on visionOS 2.0 beta 3.
Platform and Version
Development Environment: Xcode 16 Beta 3
visionOS 2 Beta 3
Description of Problem
I am currently working on integrating SharePlay into my visionOS 2 application. The application features a fully immersive space where users can interact. However, I have encountered an issue during testing on TestFlight.
When a user taps a button to activate SharePlay via the GroupActivity's activate() method within the immersive space, the immersive space visually disappears but is not properly dismissed. Instead, the immersive space can be made to reappear by turning the Digital Crown. Unfortunately, when it reappears, it overlaps with the built-in OS immersive space, resulting in a mixed and confusing user interface. This behavior is particularly concerning because the immersive space is not progressive and should not respond to turning the Digital Crown.
It is important to note that this problem is only present when testing the app via TestFlight. When the same build is compiled with the Release configuration and run directly through Xcode, the immersive space behaves as expected, and the issue does not occur.
Steps to Reproduce
Build a project that includes a fully immersive space and incorporates GroupActivity support.
Add a button within a window or through a RealityView attachment that triggers the GroupActivity's activate() method (a minimal sketch of such a button follows the steps below).
Upload the build to TestFlight.
Connect to a FaceTime call.
Open the app, enter an immersive space, then press the button to activate the Group Activity.
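A minimal sketch of the button from step 2 (MyGroupActivity is a hypothetical activity type; calling activate() from inside the immersive space is what triggers the misbehavior described above):
import SwiftUI
import GroupActivities

// Hypothetical activity used by the button below.
struct MyGroupActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Watch Together"
        meta.type = .generic
        return meta
    }
}

struct ActivateSharePlayButton: View {
    var body: some View {
        Button("Start SharePlay") {
            Task {
                _ = try? await MyGroupActivity().activate()
            }
        }
    }
}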
This is a visionOS app. I added a contextMenu to a combined view, but when I long-press the view, there is no response. The same contextMenu works normally in other views, so I think something is wrong with this combined view, but I don't know what the problem is. I hope you can point me to it. Thank you!
The view with the problem:
struct NAMEView: View {
    @StateObject private var placeStore = PlaceStore()

    var body: some View {
        ZStack {
            Group {
                HStack(spacing: 2) {
                    Image(systemName: "mappin.circle.fill")
                        .font(.system(size: 50))
                        .symbolRenderingMode(.multicolor)
                        .accessibilityLabel("your location")
                        .accessibilityAddTraits([.isHeader])
                        .padding(.leading, 5.5)
                    VStack {
                        Text("\(placeStore.locationName)")
                            .font(.title3)
                            .accessibilityLabel(placeStore.locationName)
                        Text("You are here in App")
                            .font(.system(size: 13))
                            .foregroundColor(.secondary)
                            .accessibilityLabel("You are here in App")
                    }
                    .hoverEffect { effect, isActive, _ in
                        effect.opacity(isActive ? 1 : 0)
                    }
                    .padding()
                }
            }
            .onAppear {
                placeStore.updateLocationName()
            }
            .glassBackgroundEffect()
            .hoverEffect { effect, isActive, proxy in
                effect.clipShape(.capsule.size(
                    width: isActive ? proxy.size.width : proxy.size.height,
                    height: proxy.size.height,
                    anchor: .leading
                ))
                .scaleEffect(isActive ? 1.05 : 1.0)
            }
        }
    }
}
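For bisecting the problem, a minimal comparison case may help (the menu actions here are hypothetical): if this works but the combined view above doesn't, the custom hoverEffect / glassBackgroundEffect chain is the next thing to remove piece by piece.
import SwiftUI

// Sketch: a stripped-down view with an explicit hit-test shape for the long press.
struct MenuProbeView: View {
    var body: some View {
        Label("You are here", systemImage: "mappin.circle.fill")
            .padding()
            .glassBackgroundEffect()
            .contentShape(.rect)
            .contextMenu {
                Button("Copy") {}   // hypothetical action
                Button("Share") {}  // hypothetical action
            }
    }
}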
I have an application made with Flutter, which can run on visionOS as a "Designed for iPad" app, and I would like to be able to enter mixed reality from inside this application somehow. What I have tried so far is embedding my visionOS project inside the Swift application that Flutter generates, but in this attempt Xcode told me this approach is not possible. Is there another way I could achieve my goal?
In Apple Maps, some areas have a very realistic, real-life 3D view. I want to present that 3D content in visionOS (like a Model3D). How can I do that?
Note: I'm not asking for a pseudo-3D effect on a flat screen like in iOS, but for displaying it in visionOS the way a USDZ model is displayed.
I have a SwiftUI panel that pops up in my scene.
How do I set the minimum and maximum size of a SwiftUI window for users' resizing?
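For reference, a sketch of the usual visionOS approach: constrain the root view with a min/max frame and let the scene derive its resize range from the content size.
import SwiftUI

// Sketch: the window can be resized only within the root view's frame limits.
@main
struct MyApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
                .frame(minWidth: 400, maxWidth: 1000,
                       minHeight: 300, maxHeight: 800)
        }
        .windowResizability(.contentSize)
    }
}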
I can't figure out how to get my Earth entity to rotate on its axis. This is a follow-up to a previous Apple Developer Forums post.
How would I have the Earth (parent) entity rotate counterclockwise underneath the orbiting starship child?
I tried adding the following code block to the RealityView but it is not working:
if let rotatingEarth = starshipEntity.findEntity(named: "Earth") {
    rotatingEarth.transform.rotation = simd_quatf.init(angle: 360, axis: SIMD3(x: 0, y: 1, z: 0))
    if let animation = try? AnimationResource.generate(with: rotatingEarth as! AnimationDefinition) {
        rotatingEarth.playAnimation(animation)
    }
}
Any advice on getting the Earth to rotate?
I tried reviewing the Hello World WWDC23 project code, but I was unable to untangle its complexity and figure out how that sample project gets the Earth to rotate.
I want to do this on visionOS 1.2. I realize there are new animation capabilities and possibly others coming in visionOS 2.0, but I want to address this issue in the currently released visionOS version.
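Two issues stand out in the snippet above: simd_quatf takes radians, not degrees, and an Entity can't be cast to AnimationDefinition. One sketch that works within visionOS 1.x is a small custom System that advances the rotation every frame (names like SpinComponent are my own):
import RealityKit

// Sketch: per-frame spin, sidestepping quaternion-interpolation pitfalls.
struct SpinComponent: Component {
    var radiansPerSecond: Float = 0.3
}

struct SpinSystem: System {
    static let query = EntityQuery(where: .has(SpinComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        let dt = Float(context.deltaTime)
        for entity in context.scene.performQuery(Self.query) {
            guard let spin = entity.components[SpinComponent.self] else { continue }
            // A positive angle about +Y is counterclockwise seen from above.
            let delta = simd_quatf(angle: spin.radiansPerSecond * dt, axis: [0, 1, 0])
            entity.transform.rotation = delta * entity.transform.rotation
        }
    }
}

// Registration (once, e.g. at app launch), then tag the entity:
// SpinComponent.registerComponent()
// SpinSystem.registerSystem()
// rotatingEarth.components.set(SpinComponent(radiansPerSecond: 0.3))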
I have an application that is meant to be a "watch together" GroupActivity using SharePlay that coordinates video playback using AVPlayerPlaybackCoordinator. In the current implementation, the activity begins before opening the AVPlayer; however, when clicking the back button within the AVPlayer view, the user is prompted to "End Activity for Everyone" or "End Activity for just me". There is no option to continue the group activity. My goal is to retain the same GroupSession even if a user exits the AVPlayer view. Is there a way to avoid ending the session when coordinating playback using the AVPlayerPlaybackCoordinator?
private func startObservingSessions() async {
    sessionInfo = .init()
    // Await new sessions to watch video together.
    for await session in MyActivity.sessions() {
        // Clean up the old session, if it exists.
        cleanUpSession(groupSession)

        #if os(visionOS)
        // Retrieve the new session's system coordinator object to update its configuration.
        guard let systemCoordinator = await session.systemCoordinator else { continue }

        // Create a new configuration that enables all participants to share the same immersive space.
        var configuration = SystemCoordinator.Configuration()
        // Set up the spatial Persona configuration.
        configuration.spatialTemplatePreference = .sideBySide
        configuration.supportsGroupImmersiveSpace = true
        // Update the coordinator's configuration.
        systemCoordinator.configuration = configuration
        #endif

        // Set the app's active group session before joining.
        groupSession = session
        // Store the session for use in sending messages.
        sessionInfo?.session = session

        let stateListener = Task {
            await self.handleStateChanges(groupSession: session)
        }
        subscriptions.insert(.init { stateListener.cancel() })

        // Observe when the local user or a remote participant changes the activity on the GroupSession.
        let activityListener = Task {
            await self.handleActivityChanges(groupSession: session)
        }
        subscriptions.insert(.init { activityListener.cancel() })

        // Join the session to participate in playback coordination.
        session.join()
    }
}
/// An implementation of `AVPlayerPlaybackCoordinatorDelegate` that determines how
/// the playback coordinator identifies local and remote media.
private class CoordinatorDelegate: NSObject, AVPlayerPlaybackCoordinatorDelegate {
    var video: Video?

    // Adopting this delegate method is required when playing local media,
    // or any time you need a custom strategy for identifying media. Without
    // implementing this method, coordinated playback won't function correctly.
    func playbackCoordinator(_ coordinator: AVPlayerPlaybackCoordinator,
                             identifierFor playerItem: AVPlayerItem) -> String {
        // Return the video id as the player item identifier.
        "\(video?.id ?? -1)"
    }
}
/// Initializes the playback coordinator for synchronizing video playback.
func initPlaybackCoordinator(playbackCoordinator: AVPlayerPlaybackCoordinator) async {
    self.playbackCoordinator = playbackCoordinator
    if let coordinator = self.playbackCoordinator {
        coordinator.delegate = coordinatorDelegate
    }
    if let activeSession = groupSession {
        // Set the group session on the AVPlayer instance's playback coordinator
        // so it can synchronize playback with other devices.
        playbackCoordinator.coordinateWithSession(activeSession)
    }
}
/// A coordinator that acts as the player view controller's delegate object.
final class PlayerViewControllerDelegate: NSObject, AVPlayerViewControllerDelegate {
    let player: PlayerModel

    init(player: PlayerModel) {
        self.player = player
    }

    #if os(visionOS)
    // The app adopts this method to reset the state of the player model when a user
    // taps the back button in the visionOS player UI.
    func playerViewController(_ playerViewController: AVPlayerViewController,
                              willEndFullScreenPresentationWithAnimationCoordinator coordinator: UIViewControllerTransitionCoordinator) {
        Task { @MainActor in
            // Calling reset dismisses the full-window player.
            player.reset()
        }
    }
    #endif
}
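For what it's worth, a hedged sketch of the direction I would try in the delegate above: make the back-button path tear down only player state, never the GroupSession (resetPlayerOnly() is a hypothetical variant of reset(); this is untested):
#if os(visionOS)
func playerViewController(_ playerViewController: AVPlayerViewController,
                          willEndFullScreenPresentationWithAnimationCoordinator coordinator: UIViewControllerTransitionCoordinator) {
    Task { @MainActor in
        // Hypothetical: dismiss the player but deliberately avoid
        // groupSession.leave() / groupSession.end(), so the activity and its
        // playback-coordinator association survive the player's dismissal.
        player.resetPlayerOnly()
    }
}
#endif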