visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts

VisionOS, passthrough through broadcast shows a black background
Hey, I have Enterprise Access on the account and have added the passthrough capability and the entitlement to both the main project and the "Broadcast Upload" extension. The broadcast works, except it returns a black screen. I am attaching some screenshots of the entitlement file below. I have tried searching online to no avail, so any help would be greatly appreciated. I am also attaching the code.

import Foundation
import AVFoundation
import ReplayKit

class VideoAssetWriter {
    private var isRecording = false
    private var outputStream: OutputStream?

    private func setupConnection() {
        guard outputStream == nil else { return }
        print("setting up connection.")
        let serverIP = macIP
        let port = 12345
        var readStream: Unmanaged<CFReadStream>?
        var writeStream: Unmanaged<CFWriteStream>?
        CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault,
                                           serverIP as CFString,
                                           UInt32(port),
                                           &readStream,
                                           &writeStream)
        guard let writeStream = writeStream?.takeRetainedValue() else {
            print("Failed to create write stream")
            return
        }
        self.outputStream = writeStream as OutputStream
        self.outputStream?.open()
    }

    func startRecording() {
        isRecording = true
    }

    func processVideoSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
        print("Processing Sample 1")
        guard isRecording else { return }
        print("Processing Sample 2")
        sendVideoChunkToServer(sampleBuffer)
    }

    private func sendVideoChunkToServer(_ sampleBuffer: CMSampleBuffer) {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        print("Processing Sample 3")
        let ciImage = CIImage(cvPixelBuffer: imageBuffer)
        let context = CIContext()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        print("Processing Sample 4")
        let image = UIImage(cgImage: cgImage)
        if let imageData = image.jpegData(compressionQuality: 0.5) {
            guard imageData.count <= 10_000_000 else {
                print("Frame too large: \(imageData.count) bytes")
                return
            }
            if outputStream == nil {
                setupConnection()
            }
            print("sending frame size up connection.")
            // Convert to network byte order (big-endian)
            var frameSize = UInt32(imageData.count).bigEndian
            let sizeData = Data(bytes: &frameSize, count: MemoryLayout<UInt32>.size)
            _ = sizeData.withUnsafeBytes {
                outputStream?.write($0.baseAddress!.assumingMemoryBound(to: UInt8.self), maxLength: sizeData.count)
            }
            print("sending image data up connection.")
            // Send frame data
            _ = imageData.withUnsafeBytes {
                outputStream?.write($0.baseAddress!.assumingMemoryBound(to: UInt8.self), maxLength: imageData.count)
            }
        }
    }

    func stopRecording() {
        isRecording = false
        outputStream?.close()
        outputStream = nil
    }
}

This is the broadcast picker view wrapper:

// Broadcast Picker View wrapper
struct BroadcastButtonView: UIViewRepresentable {
    func makeUIView(context: Context) -> RPSystemBroadcastPickerView {
        let broadcastPickerView = RPSystemBroadcastPickerView(
            frame: CGRect(x: 0, y: 0, width: 200, height: 200)
        )
        // Make sure this matches your broadcast extension bundle identifier
        broadcastPickerView.preferredExtension = "my-extension-bundle-identifier"
        broadcastPickerView.showsMicrophoneButton = false
        return broadcastPickerView
    }

    func updateUIView(_ uiView: RPSystemBroadcastPickerView, context: Context) { }
}

The extension SampleHandler:

override func broadcastPaused() {
    print("paused broadcast")
    // User has requested to pause the broadcast. Samples will stop being delivered.
}

override func broadcastResumed() {
    print("resumed broadcast")
    // User has requested to resume the broadcast. Samples delivery will resume.
}

override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    print("broadcast received")
    assetWriter?.processVideoSampleBuffer(sampleBuffer)
}

Looking forward to any and all help.

Information property list:
Information property list for the extension:
The capabilities:
Replies: 1 · Boosts: 0 · Views: 99 · Activity: 16h
RealityKit texture sampling behaviour
Hey there, I am working on an app that displays environmental data using PNG color channels to represent data ranges, which gets overlaid on a map. The sampled values aren't what I'm expecting though... for example an RGB value of 0x7f0000 (R = 0.5, G = 0, B = 0) would be seen as 0.21, 0, 0 in the shader. This basically makes it unusable if I'm trying to show scientific data... I'm half wondering if I am completely misunderstanding how sampling works in RealityKit / Reality Composer Pro. Anybody have any idea why it works like this?

Actual result (chart labels added in Photoshop):
Expected: Red > 0.1
Shader Graph
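The 0.21 reading looks consistent with an sRGB-to-linear conversion being applied when the texture is sampled. A quick check of the arithmetic (an illustrative sketch only, not confirmation of what RealityKit does internally; the helper name srgbToLinear is ours):

import Foundation

// Standard sRGB -> linear transfer function (IEC 61966-2-1).
func srgbToLinear(_ c: Double) -> Double {
    c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
}

let encoded = Double(0x7F) / 255.0   // 0.498, the value stored in the PNG's red channel
let linear = srgbToLinear(encoded)   // ~0.212, close to the 0.21 observed in the shader
print(encoded, linear)

If that is the cause, the usual fix for data textures in other pipelines is to treat the image as raw (non-color) data so that no transfer function is applied; whether and where Reality Composer Pro exposes that option for image textures is worth verifying rather than assuming.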
Replies: 3 · Boosts: 0 · Views: 126 · Activity: 1d
Create Anchor on Objects from 2D Data
We're developing a visionOS application where we would like to do product recognition (like food items). We have enterprise entitlements and therefore also main camera access on visionOS. We send the live camera frames to a trained Core ML model and receive 2D coordinates from the model's detection prediction. Now we would like to create a 3D anchor on the detected items so it is visible to the user; the 3D anchor will display the class name of the detected item. How do we transform this 2D coordinate from the model prediction into a 3D anchor?
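One common approach (a sketch only, not a confirmed visionOS recipe) is to unproject the 2D pixel coordinate into a world-space ray using the camera intrinsics and the camera-to-world transform, then intersect that ray with scene geometry (a scene-reconstruction mesh, a raycast, or an assumed distance) to obtain the anchor position. The helper below is just the matrix math; the intrinsics, cameraToWorld, and pixel coordinate are assumed to come from the enterprise camera-frame API and the detection result described above.

import simd

// Builds a world-space ray through a pixel.
// - K: 3x3 camera intrinsics (fx, fy on the diagonal; cx, cy in the last column)
// - cameraToWorld: 4x4 transform taking camera space into world space
func ray(throughPixel px: SIMD2<Float>,
         intrinsics K: simd_float3x3,
         cameraToWorld: simd_float4x4) -> (origin: SIMD3<Float>, direction: SIMD3<Float>) {
    let fx = K[0][0], fy = K[1][1]
    let cx = K[2][0], cy = K[2][1]
    // Direction in camera space; this assumes a camera looking down -Z with +Y up and a
    // top-left image origin (hence the flipped y). Verify against the real conventions.
    let dirCamera = simd_normalize(SIMD3<Float>((px.x - cx) / fx, -(px.y - cy) / fy, -1))
    let origin = SIMD3<Float>(cameraToWorld.columns.3.x,
                              cameraToWorld.columns.3.y,
                              cameraToWorld.columns.3.z)
    let rotated = cameraToWorld * SIMD4<Float>(dirCamera.x, dirCamera.y, dirCamera.z, 0)
    return (origin, simd_normalize(SIMD3<Float>(rotated.x, rotated.y, rotated.z)))
}

The anchor's world position would then come from intersecting the ray with whatever depth information is available; placing an entity with that world transform and attaching a text mesh for the class name is the straightforward RealityKit side of it.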
Replies: 2 · Boosts: 0 · Views: 246 · Activity: 2d
Recursively searching the realityKitContentBundle
Hi, hopefully someone can share some ideas on how to accomplish this. I know we can load models from realityKitContentBundle like

let model = try? await Entity(named: "testModel", in: realityKitContentBundle)

But this loads from the root of RealityKitContent.rkassets; if I have the models in some subfolder, then I have to add the complete path, like

let model = try? await Entity(named: "/superModels/testModel", in: realityKitContentBundle)

What I want is to be able to search recursively in all folders for that file, as I have several subfolders with different models. Any suggestions? Thanks in advance. Guillermo
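One pragmatic workaround (a sketch, assuming the set of subfolders inside RealityKitContent.rkassets is known at build time, since the compiled asset bundle doesn't expose its folder layout for ordinary enumeration) is to try a list of candidate paths until one loads; the searchPaths list and the helper name are ours:

import RealityKit
import RealityKitContent

/// Tries each known subfolder until the named entity loads, returning nil if none match.
func loadEntity(named name: String,
                searchPaths: [String] = ["", "superModels"]) async -> Entity? {
    for folder in searchPaths {
        // Match whatever path-separator / leading-slash convention works in your project.
        let path = folder.isEmpty ? name : "\(folder)/\(name)"
        if let entity = try? await Entity(named: path, in: realityKitContentBundle) {
            return entity
        }
    }
    return nil
}

// Usage:
// let model = await loadEntity(named: "testModel")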
Replies: 3 · Boosts: 0 · Views: 235 · Activity: 2d
Ground Shadows pass through objects
This issue has existed since visionOS 1, unless this is how it is supposed to work. As you can see in the screen capture, the shadows from the top box are shown on all three boxes below. This is a screen capture in Reality Composer Pro, but the same thing happens on the Vision Pro. Is there any way to stop this behavior and have shadows only on the first object below the object that is casting them?
Replies: 2 · Boosts: 0 · Views: 222 · Activity: 1w
Regarding the display of Apple Arcade access points when playing iOS apps on visionOS
[The problem that is occurring] The game app in development is compatible with iOS, macOS, tvOS, and visionOS. In this game app, the Apple Arcade / Game Center access point is placed in the main menu. On visionOS, when the main menu is opened, the Game Center dashboard launches automatically within 1-2 seconds of the main menu being displayed. This happens every time the menu is re-opened. On iOS, macOS, and tvOS, the dashboard appears only after pressing the Game Center access point icon. [What we want to solve] We would like the Game Center dashboard to launch only after the access point icon is pressed on visionOS, as it does on the other operating systems. Or, if there is a standard implementation method for visionOS, please let us know.
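For reference, the usual GameKit pattern is to activate the access point when the menu appears and let the system open the dashboard only on a tap. A minimal sketch of that setup is below; it documents the standard API rather than explaining the visionOS auto-open behaviour, which may be worth a bug report:

import GameKit

// Show the access point icon when the main menu appears.
func showAccessPoint() {
    GKAccessPoint.shared.location = .topLeading
    GKAccessPoint.shared.showHighlights = false
    GKAccessPoint.shared.isActive = true   // only displays the icon; the dashboard should
                                           // open when the player selects it
}

// Hide the icon again when leaving the menu.
func hideAccessPoint() {
    GKAccessPoint.shared.isActive = false
}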
Replies: 1 · Boosts: 0 · Views: 174 · Activity: 6d
Position of volumetric and 2D windows on visionOS
Hi, I would like to create an app that has a volumetric window in the middle and two 2D windows on the sides. When I tried that, the 2D windows are positioned slightly below the volumetric window for some reason (image 1). It looks like the base (handle, close button, etc.) of the volumetric window is aligned with the center of the whole 2D window. I would like all the window bases to be aligned (image 2). (I can of course do this manually by dragging the window down a bit with my hand, but that's an inconvenience for my use case.) I tried making the whole volumetric window content taller, but that did not help and the content actually went far above the 2D windows (image 3). I suppose this was some design choice when creating the window-positioning behavior on visionOS. Am I doing something wrong? Is there a way to achieve what I want, or a better way to customize the position of windows than the five predefined positions in defaultWindowPlacement?

Image 1 - current:
Image 2 - what I want:
Image 3 - current, larger content:

Code:

import SwiftUI

@main
struct placementTestApp: App {
    @Environment(\.openWindow) var openWindowAction

    var body: some Scene {
        WindowGroup(id: "volume") {
            VolumetricWindowView()
                .onAppear {
                    openWindowAction(id: "first")
                    openWindowAction(id: "second")
                }
        }
        .windowStyle(.volumetric)
        .volumeWorldAlignment(.gravityAligned)

        WindowGroup(id: "first") {
            NormalWindowView()
        }
        .defaultWindowPlacement { _, context in
            if let mainWindow = context.windows.first(where: { $0.id == "volume" }) {
                WindowPlacement(.leading(mainWindow))
            } else {
                WindowPlacement()
            }
        }

        WindowGroup(id: "second") {
            NormalWindowView()
        }
        .defaultWindowPlacement { _, context in
            if let mainWindow = context.windows.first(where: { $0.id == "volume" }) {
                WindowPlacement(.trailing(mainWindow))
            } else {
                WindowPlacement()
            }
        }
    }
}
Replies: 1 · Boosts: 0 · Views: 120 · Activity: 1w
RealityKit attachments on macOS?
I'm building a SwiftUI+RealityKit app for visionOS, macOS and iOS. The main UI is a diorama-like 3D scene which is shown in orthographic projection on macOS and as a regular volume on visionOS, with some SwiftUI buttons, labels and controls above and below the RealityView. Now I want to add UI that is positioned relative to some 3D elements in the RealityView, such as a billboarded name label over characters with a "show details" button and such. However, it seems the whole RealityView Attachments API is visionOS only? The types don't even exist on macOS. Why is it visionOS only? And how would I overlay SwiftUI elements over a RealityView using SwiftUI code on macOS if not with attachments?
Replies: 1 · Boosts: 0 · Views: 155 · Activity: 1w
VisionOS ARKit CameraFrame Sample Parameters Extrinsics
The following documentation tells me that CameraFrame.Sample.Parameters.extrinsics is of type simd_float4x4, great! https://developer.apple.com/documentation/arkit/cameraframe/sample/parameters/4443449-extrinsics

I have read in the answer to another post that these extrinsics represent the pose of the physical camera relative to the device anchor. Did I understand correctly that the device anchor is where the scene is rendered from onto the user's display? What is the coordinate system in which this offset is defined: which axis is left, which one is up, which one is forward?

The last column of the extrinsics seems to define a translation of approximately 2 cm along the x axis, -2 cm along the y axis, and -5 cm along the z axis. I tried to measure the physical distance between the main left and right cameras in order to find out whether it's 2 cm or 5 cm from the "middle"; it looks more like 5, so I assume that the z axis points towards the right (from the user's perspective). Is that so? For x and y, I assume that the physical camera is approximately 2 cm in front of the user and 2 cm towards the bottom; which of x and y is horizontal, and which one is vertical?

How is the camera image indexed: is it row-major, and is the origin at the top left? I am looking forward to learning all the details of these extrinsics in order to make use of them.
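For working with the raw values: the translation part of a simd_float4x4 is its last column, and if the extrinsics really is a camera-from-device-anchor pose (an assumption based on the forum answer cited above, not something asserted here), the camera's world pose would come from composing it with the device anchor transform. A minimal sketch:

import simd

// Assumption: `extrinsics` maps camera space into device-anchor space, and
// `originFromDevice` is the device anchor's originFromAnchorTransform.
func cameraWorldTransform(extrinsics: simd_float4x4,
                          originFromDevice: simd_float4x4) -> simd_float4x4 {
    originFromDevice * extrinsics
}

// The translation (in metres) is the last column of a 4x4 transform.
func translation(of m: simd_float4x4) -> SIMD3<Float> {
    SIMD3<Float>(m.columns.3.x, m.columns.3.y, m.columns.3.z)
}

The axis conventions themselves (which way x, y, and z point, and whether the image origin is top-left and row-major) are exactly what the post is asking, so this sketch deliberately leaves those open rather than asserting them.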
Replies: 3 · Boosts: 0 · Views: 248 · Activity: 1w
[Reality Composer Pro] Detecting Collision in RealityComposerPro scene
Hello! https://forums.developer.apple.com/forums/thread/762763 I read this thread, and it is similar to what I'm trying to do. I have two entities in the scene, "HandTrackingEntity" and "HandScanner". On "HandTrackingEntity" I put an Anchoring component and a Collision component (Trigger). On "HandScanner" I put a Behaviors component (OnCollision) and a Collision component. Here are the pictures showing how I set the components, and I set the physicsSimulation property to .none. I was expecting the Timeline to play when I put my hand (with HandTrackingEntity) on the "HandScanner" entity, but it didn't work. Am I missing some steps? I also need sample code to understand how to apply the physicsSimulation property. I'd appreciate it if you could let me know about it.
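As a code-side cross-check that the collision pair is firing at all (a sketch using RealityKit's standard event subscription rather than the Behaviors/Timeline path; the entity name mirrors the one in the post):

import RealityKit

// Call from a RealityView's make/update closure after the scene is loaded.
// The returned EventSubscription must be stored (e.g. in @State) to keep receiving events.
func observeScannerCollisions(content: RealityViewContent, handScanner: Entity) -> EventSubscription {
    content.subscribe(to: CollisionEvents.Began.self, on: handScanner, componentType: nil) { event in
        print("collision began: \(event.entityA.name) <-> \(event.entityB.name)")
    }
}

If the print never fires, the issue is likely with the collision shapes or the anchoring setup rather than with the Behaviors/Timeline wiring.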
Replies: 1 · Boosts: 0 · Views: 150 · Activity: 1w
How to obtain FoV values for the eyes on visionOS
Hi, we have been experimenting with visionOS and we need to query the field-of-view values for each eye of the device. We are currently using drawable.views[0].tangents and drawable.views[1].tangents respectively, which are labeled as 'Deprecated'. We wonder whether there is an alternative function for obtaining the FOVs, since we had no luck calculating them from the projection matrix we obtain from the drawables. Thanks
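For what it's worth, a field of view can be recovered from a perspective projection matrix directly. The sketch below assumes a Metal-style column-major matrix with a possibly asymmetric frustum (as on Vision Pro) and should be checked against the actual drawable's convention:

import Foundation
import simd

/// Recovers horizontal and vertical field of view (in radians) from a perspective projection matrix.
/// Handles asymmetric frusta; for a symmetric one this reduces to 2 * atan(1 / m00) and 2 * atan(1 / m11).
func fieldOfView(from p: simd_float4x4) -> (horizontal: Float, vertical: Float) {
    let m00 = p[0][0], m11 = p[1][1]
    let a = p[2][0], b = p[2][1]      // off-centre terms; zero for a symmetric frustum
    let left = atan((a - 1) / m00)    // signed half-angles
    let right = atan((a + 1) / m00)
    let bottom = atan((b - 1) / m11)
    let top = atan((b + 1) / m11)
    return (right - left, top - bottom)
}

If the earlier attempt assumed a symmetric frustum, that could explain the lack of luck: the off-centre terms are significant per eye, while conventions such as reversed depth only affect the z-related entries and leave the ones used here untouched.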
Replies: 0 · Boosts: 0 · Views: 137 · Activity: 2w
How to draw directly to the pixels of the Vision Pro screens?
I have been playing around with the idea of drawing directly onto the pixels of the Vision Pro, as I am working on a telepresence app that streams a live stereoscopic feed from an articulated robot neck to the wearer. I was playing around with the Compositor Services demo and modified it to show the following. I created a grid pattern using normalized device coordinates (-1 to 1), and it looks great when it shows up in the simulator, as shown below. I wanted to see the effects of lens distortion on the image, so I launched this modified app on the actual Vision Pro; it seems that each eye has only a portion of this screen visible. I have included a capture of a screen recording made inside the Vision Pro while running the modified app. The lines appear straight, which says to me that there must be some automatic pre-distortion correction applied (similar to an image taken from an AVP teardown that I cannot link here). However, I am wondering why the grid appears cropped, and what the bounds of the frame are defined by.
Replies: 0 · Boosts: 1 · Views: 184 · Activity: 2w
VisionOS - Enhancing Accessibility for Individuals with Visual Impairments
Hello, I am reaching out because I believe your product, the Vision Pro, could significantly improve the quality of life for individuals with visual impairments, and I thought my personal experience might be of interest to you. We could discuss this in more detail, but to respect your time, I'll get straight to the point:

I have retinitis pigmentosa, a rare retinal disease for which there is currently no treatment. This condition causes a progressive narrowing of the visual field (potentially leading to blindness) and a deficit in photoreceptors (let's just say I'm not exactly a night owl). In my case, it has become impossible to go out alone in the dark or even see in dim light. (Goodbye evening parties—I can't even find the entrance to a nightclub, let alone navigate the dance floor!)

However, I've discovered that sometimes, simply looking through my phone screen and using its brightness helps me see much better. Over the years, I've imagined how amazing it would be if a pair of glasses could simply display the image my eyes are supposed to perceive, but with enhanced brightness. It would allow me to live my life as freely as others, whether that's venturing out at night or finding that elusive pen lost in the depths of my apartment.

I initially looked into the Google Glass project, for example, but it pales in comparison to what Apple is now creating, don't you think? What amuses me most is that what some see as a tool that isolates users from reality could actually become an inclusion device for people like me, who would use it to go out and engage with the world. (I can't count how many times I've gone home early in winter because of the anxiety caused by the early darkness, or turned down after-work gatherings with my DevOps colleagues.)

The Vision Pro could simply restore reality for us by enhancing what has been progressively lost. And that's just for nighttime! I can only imagine how helpful it could be during the day—for instance, by detecting obstacles or highlighting dangerous zones in a person's limited field of vision. One could even use OCR technology to map the results of a visual field test and provide tailored assistance. What incredible potential…

I dream of a day when ideas like these become a reality, and I wanted to share them with you. This wouldn't just help me—it could help many others as well. Thank you for taking the time to read this message. I would be delighted to contribute in any way, should these development directions resonate with you now or in the future.

Wishing you an excellent evening,
Hugo Bled
Replies: 1 · Boosts: 0 · Views: 253 · Activity: 2w