I’m working on a Vision Pro app using Metal and need to implement multi-pass rendering. Specifically, I want to render intermediate results to a texture, then use that texture in a second pass for post-processing before presenting the final output.
What’s the best approach in visionOS? Should I use multiple render passes in a single command buffer or separate command buffers? Any insights on efficiently handling this in RealityKit or Metal?
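For reference, here is a minimal sketch of the two-pass structure I have in mind, with both passes encoded into a single command buffer. The pipeline and texture names are illustrative, and the intermediate texture is assumed to have been created with .renderTarget and .shaderRead usage:

import Metal

// Encodes both passes into one command buffer (scenePipeline / postProcessPipeline are placeholders).
func drawFrame(commandQueue: MTLCommandQueue,
               intermediateTexture: MTLTexture,   // usage: [.renderTarget, .shaderRead]
               drawableTexture: MTLTexture,
               scenePipeline: MTLRenderPipelineState,
               postProcessPipeline: MTLRenderPipelineState) {
    guard let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    // Pass 1: render the scene into the offscreen texture.
    let offscreenPass = MTLRenderPassDescriptor()
    offscreenPass.colorAttachments[0].texture = intermediateTexture
    offscreenPass.colorAttachments[0].loadAction = .clear
    offscreenPass.colorAttachments[0].storeAction = .store   // keep the result for pass 2
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: offscreenPass) {
        encoder.setRenderPipelineState(scenePipeline)
        // ... encode scene draw calls here ...
        encoder.endEncoding()
    }

    // Pass 2: full-screen post-process that samples the intermediate texture.
    let finalPass = MTLRenderPassDescriptor()
    finalPass.colorAttachments[0].texture = drawableTexture
    finalPass.colorAttachments[0].loadAction = .clear
    finalPass.colorAttachments[0].storeAction = .store
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: finalPass) {
        encoder.setRenderPipelineState(postProcessPipeline)
        encoder.setFragmentTexture(intermediateTexture, index: 0)
        encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3) // full-screen triangle
        encoder.endEncoding()
    }

    commandBuffer.commit()
}

The storeAction = .store on the first pass is what makes the intermediate texture readable in the second pass.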
Thanks!
Vision
Apply computer vision algorithms to perform a variety of tasks on input images and video using Vision.
Posts under Vision tag
Hello,
I am currently working on a Unity project for the Apple Vision Pro. I would like to have people passing in front of the virtual objects occlude the virtual objects that are behind. Something similar to this: https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people
Unfortunately, I could not find any documentation about this. Is it possible to implement body segmentation or occlusion on the Apple Vision Pro? If it's not currently supported, are there plans to add it? Any ideas on how to achieve this with existing tools?
Thanks!
Mehdi
Hi everyone,
I'm working with VNFeaturePrintObservation in Swift to compute the similarity between images. The computeDistance function allows me to calculate the distance between two images, and I want to cluster similar images based on these distances.
Current Approach
Right now, I'm using a brute-force approach where I compare every image against every other image in the dataset. This results in an O(n^2) complexity, which quickly becomes a bottleneck. With 5000 images, it takes around 10 seconds to complete, which is too slow for my use case.
Question
Are there any efficient algorithms or data structures I can use to improve performance?
If anyone has experience with optimizing feature vector clustering or has suggestions on how to scale this efficiently, I'd really appreciate your insights. Thanks!
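For context, this is a simplified sketch of the brute-force pass I'm running today; the feature prints are generated once up front, so only the O(n^2) distance loop below is the bottleneck (the helper names are illustrative):

import Vision

// Generate one feature print per image URL (done once, O(n)).
func featurePrint(for url: URL) throws -> VNFeaturePrintObservation? {
    let request = VNGenerateImageFeaturePrintRequest()
    let handler = VNImageRequestHandler(url: url, options: [:])
    try handler.perform([request])
    return request.results?.first as? VNFeaturePrintObservation
}

// Brute-force pairwise distances (O(n^2)); this is the part I want to replace
// with an approximate nearest-neighbour or clustering structure.
func pairwiseDistances(_ prints: [VNFeaturePrintObservation]) throws -> [[Float]] {
    var matrix = Array(repeating: Array(repeating: Float(0), count: prints.count), count: prints.count)
    for i in 0..<prints.count {
        for j in (i + 1)..<prints.count {
            var distance = Float(0)
            try prints[i].computeDistance(&distance, to: prints[j])
            matrix[i][j] = distance
            matrix[j][i] = distance
        }
    }
    return matrix
}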
I am encountering an issue while using the multiview video demo provided at this link "https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/". Specifically, when running on versions of visionOS prior to 2.2, navigating back results in a blank screen. Has anyone else experienced this problem and found a solution? Any advice or workaround would be greatly appreciated.
Hello,
I am developing an app for the Swift Student challenge; however, I keep encountering an error when using ClassifyImageRequest from the Vision framework in Xcode:
VTEST: error: perform(_:): inside 'for await result in resultStream' error: internalError("Error Domain=NSOSStatusErrorDomain Code=-1 \"Failed to create espresso context.\" UserInfo={NSLocalizedDescription=Failed to create espresso context.}")
It works perfectly when testing it on a physical device, and I saw on another thread that ClassifyImageRequest doesn't work on simulators. Will this cause problems with my submission to the challenge?
Thanks
Dear Apple Developer Team,
I am writing to request the addition of GS1 DataBar Stacked (both regular and expanded variants) to the barcode symbologies supported by the Vision framework (VNBarcodeSymbology) and VisionKit's DataScannerViewController.
Currently, Vision supports several GS1 DataBar formats, such as:
VNBarcodeSymbology.gs1DataBar
VNBarcodeSymbology.gs1DataBarExpanded
VNBarcodeSymbology.gs1DataBarLimited
However, GS1 DataBar Stacked is widely used in industries such as retail, pharmaceuticals, and logistics, where space constraints prevent the use of the standard GS1 DataBar format. Many businesses rely on this symbology to encode GTINs and other product data, but Apple's barcode scanning API does not explicitly support it.
Why This Feature Matters:
Essential for Small Packaging: GS1 DataBar Stacked is commonly used on small product labels where a standard linear barcode does not fit.
Widespread Industry Adoption: Many point-of-sale (POS) systems and inventory management tools require this symbology.
Improves iOS Adoption for Enterprise Use: Adding support would make Apple’s Vision framework a more viable solution for businesses that currently rely on third-party barcode scanning SDKs.
Feature Request:
Please add GS1 DataBar Stacked and GS1 DataBar Expanded Stacked to the recognized symbologies in:
VNBarcodeSymbology (for Vision framework)
DataScannerViewController (for VisionKit)
This addition would enhance the versatility of Apple’s barcode scanning tools and reduce the need for third-party libraries.
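For illustration, this is roughly how a scanner is configured today; the stacked variants are the only missing piece. (A sketch assuming iOS 16+ and VisionKit's DataScannerViewController.)

import VisionKit

// Current configuration: the non-stacked GS1 DataBar symbologies are available,
// but there is no case for GS1 DataBar Stacked / Expanded Stacked.
let scanner = DataScannerViewController(
    recognizedDataTypes: [
        .barcode(symbologies: [.gs1DataBar, .gs1DataBarExpanded, .gs1DataBarLimited])
    ],
    qualityLevel: .accurate,
    isHighlightingEnabled: true
)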
I appreciate your consideration of this request and would be happy to provide more details or test implementations if needed.
Thank you for your time and support!
Best regards
We are building an app that reads text. It reads English and horizontal Japanese text successfully, but in some cases we need to read Japanese tategaki (vertically aligned text), and in those cases the same code gives no output. Is there any configuration change needed to read Japanese tategaki, or is it actually possible to read Japanese tategaki with the Vision framework at all?
lazy var detectTextRequest = VNRecognizeTextRequest { request, error in
    self.resStr = "\n"
    self.words = [:]
    // Get OCR result
    guard let res = request.results as? [VNRecognizedTextObservation] else { return }
    // Separate the words by space
    let text = res.compactMap { $0.topCandidates(1).first?.string }.joined(separator: " ")
    var n = 0
    self.wordArr = [[]]
    self.xs = 1
    self.ys = 1
    var hs = 0.0 // To compare the heights of the words
    // Record each word's top-left position; capture the original axis (top-most word's axis) only once
    for r in res {
        let word = r.topCandidates(1).first?.string
        self.words[word ?? ""] = [r.topLeft.x, r.topLeft.y]
        if self.cartLabelType == 1 {
            if (word?.components(separatedBy: CharacterSet(charactersIn: "//")).count ?? 0) > 2 {
                self.xs = r.topLeft.x
                self.ys = r.topLeft.y
            }
        }
    }
}
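For reference, the only configuration we know to change is the language and recognition level; a sketch of that setup is below, although we have not been able to confirm whether it makes any difference for vertical text:

let request = VNRecognizeTextRequest { request, error in
    guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
    let lines = observations.compactMap { $0.topCandidates(1).first?.string }
    print(lines)
}
request.recognitionLevel = .accurate
request.recognitionLanguages = ["ja-JP"]   // prioritise Japanese
request.usesLanguageCorrection = true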
If I update the Vision Pro window size, the text field and button do not update to match the new size.
It works on some screens but not on others.
Please refer to the screenshot below for reference.
Based on the iPhone 14 Max camera, implement model-based recognition and draw a rectangular box around the recognized object. The object's width and height are measured with LiDAR and displayed in centimeters on the continuously updated live image.
Hello all... is there a way to close a contour if you have found, say, two points on each side of the top "extension"? See the image attached. The end result I want is a trapezoid-type shape. A code example would be very appreciated, thank you :) I think I have the contour as a CGPath, so is there a way to edit a CGPath, or to close the top from a top-left point to a top-right point?
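For illustration, here is a small sketch of what I mean by "closing the top": rebuilding a closed trapezoid from four known corner points with CGMutablePath (the point values are made up):

import CoreGraphics

// Four known corner points (illustrative values), ordered bottom-left,
// bottom-right, top-right, top-left.
let bottomLeft  = CGPoint(x: 20,  y: 200)
let bottomRight = CGPoint(x: 300, y: 200)
let topRight    = CGPoint(x: 240, y: 60)
let topLeft     = CGPoint(x: 80,  y: 60)

let trapezoid = CGMutablePath()
trapezoid.move(to: bottomLeft)
trapezoid.addLine(to: bottomRight)
trapezoid.addLine(to: topRight)
trapezoid.addLine(to: topLeft)      // the "top" edge joins the two found points
trapezoid.closeSubpath()            // closes back to bottomLeft, giving a closed trapezoid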
I have to decrease the main window size when the user opens an immersive space in my project.
I tried doing it with frame, but that does not update the main window size; it only updates the view's frame.
We are using VNRecognizeTextRequest to detect text in documents, and we have noticed that even in some very clear and well-formatted documents there are still instances where text blocks are missed. Live Text has the same issue.
End goal: to detect 3 lines and 2 corners accurately. I'm trying contours, but they are a bit off. Is there a way, or settings for contours, to detect corners and lines more accurately, perhaps with fewer, sharper-edged contours? Or some other API or method, please?
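For reference, these are the only knobs I have found to adjust so far; a sketch of the configuration is below, in case someone can suggest better values or a different request entirely (the cgImage parameter stands in for the court photo):

import Vision

func detectContours(in cgImage: CGImage) throws -> VNContoursObservation? {
    let request = VNDetectContoursRequest()
    request.contrastAdjustment = 2.0        // boost contrast before contour extraction
    request.maximumImageDimension = 1024    // a larger working size can give finer, smoother contours
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results?.first as? VNContoursObservation
}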
I would love an email, please ;) thank you. 2. There is also an overlay/scale issue.
Hello, I need help: I want to select/filter the contours on an image and am not sure of the best way to do that. The idea is to select/filter for the bottom-left-most contour (see the attached image, please). I will also need its end points, or the court corners, and I need the contour to be a fine, smooth line, i.e. an accurate trace of only the court end line and side lines. Thank you :) I'm also glad for other ideas or APIs to determine the lines/corners I need.
Glad to email to discuss if that is better/easier; actually I'd prefer that. Thanks.
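To make the question concrete, here is a rough sketch of the kind of filtering I have been attempting: pick the top-level contour whose path sits closest to the bottom-left of the image, then approximate it down to a few dominant points. This assumes a VNContoursObservation is already in hand and uses Vision's normalized, bottom-left-origin coordinates:

import Vision
import simd

// Pick the top-level contour nearest the bottom-left corner of the image,
// using the bounding box of each contour's normalized path.
func bottomLeftMostContour(in observation: VNContoursObservation) -> VNContour? {
    observation.topLevelContours.min { a, b in
        let boxA = a.normalizedPath.boundingBox
        let boxB = b.normalizedPath.boundingBox
        // A smaller x + y origin means closer to the bottom-left in Vision's coordinate space.
        return (boxA.origin.x + boxA.origin.y) < (boxB.origin.x + boxB.origin.y)
    }
}

// Reduce the contour to a few dominant vertices, which is where I hoped
// to read off end points / corners.
func approximateCorners(of contour: VNContour) throws -> [simd_float2] {
    try contour.polygonApproximation(epsilon: 0.01).normalizedPoints
}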
Hi everyone,
I'm working on a SwiftUI app and need help building a view that integrates the device's camera and uses a pre-trained Core ML model for real-time object recognition. Here's what I want to achieve:
Open the device's camera from a SwiftUI view.
Capture frames from the camera feed and analyze them using a Create ML-trained Core ML model.
If a specific figure/object is recognized, automatically close the camera view and navigate to another screen in my app.
I'm looking for guidance on:
Setting up live camera capture in SwiftUI.
Using Core ML and Vision frameworks for real-time object recognition in this context.
Managing navigation between views when the recognition condition is met.
Any advice, code snippets, or examples would be greatly appreciated!
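To anchor the discussion, here is a rough sketch of the pipeline I'm picturing: an AVCaptureSession feeding frames into a VNCoreMLRequest, with a callback that the SwiftUI layer can use to dismiss the camera view. The FigureClassifier model name, the "targetFigure" label, and the 0.9 confidence threshold are all made up:

import AVFoundation
import Vision
import CoreML

final class CameraRecognizer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private var request: VNCoreMLRequest?
    private let onMatch: () -> Void   // called when the target figure is recognized

    init(onMatch: @escaping () -> Void) {
        self.onMatch = onMatch
        super.init()
        // FigureClassifier is a hypothetical Create ML model bundled with the app.
        if let model = try? VNCoreMLModel(for: FigureClassifier(configuration: MLModelConfiguration()).model) {
            request = VNCoreMLRequest(model: model) { [weak self] request, _ in
                guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
                if top.identifier == "targetFigure" && top.confidence > 0.9 {
                    DispatchQueue.main.async { self?.onMatch() }
                }
            }
        }
        configureSession()
    }

    private func configureSession() {
        session.beginConfiguration()
        if let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
           let input = try? AVCaptureDeviceInput(device: camera),
           session.canAddInput(input) {
            session.addInput(input)
        }
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        if session.canAddOutput(output) { session.addOutput(output) }
        session.commitConfiguration()
    }

    // Run the Vision request on every captured frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let request,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right, options: [:])
        try? handler.perform([request])
    }
}

On the SwiftUI side, I imagine the preview would be hosted in a UIViewRepresentable wrapping an AVCaptureVideoPreviewLayer, session.startRunning() would be called off the main thread, and onMatch could flip a @State flag that drives the navigation.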
Thanks in advance!
Hello,
I checked following documentations.
Vision | Apple Developer Documentation
Discover Swift enhancements in the Vision framework - WWDC24 - Videos - Apple Developer
I saw Vision Framework is available on visionOS.
So I want to know whether it is possible to use the Vision framework on visionOS to track human and animal body poses, or whether there are limits to using it on visionOS.
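For concreteness, this is the classic Vision API I am hoping to call; a minimal sketch is below, and whether it runs on visionOS is exactly what I am asking:

import Vision

func detectBodyPose(in cgImage: CGImage) throws -> [VNHumanBodyPoseObservation] {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results as? [VNHumanBodyPoseObservation] ?? []
}

// Example: read the recognized joints of one detected person.
func printJoints(for observation: VNHumanBodyPoseObservation) throws {
    let joints = try observation.recognizedPoints(.all)
    for (name, point) in joints where point.confidence > 0.3 {
        print(name.rawValue, point.location)   // normalized image coordinates
    }
}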
Dear Apple Engineers,
I am working on a project in visionOS and need to implement a curved surface effect for video playback, where the width of the surface can be dynamically adjusted. Specifically, I want the video to be displayed on a curved surface (similar to a scroll unfolding), and the user should be able to adjust the width of this surface.
I have the following specific questions:
How can I implement a curved surface for video playback and ensure the video content is not stretched or distorted on the surface?
How can I create a dynamic curved surface (such as a bending plane) in RealityKit or visionOS, where the width can be adjusted by the user?
Is it possible to achieve more complex curved surface effects (such as scroll unfolding or bending) using Shaders or other techniques?
Thank you very much for your help!
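As a starting point, here is a hedged sketch of the direction I have been exploring: generating a cylindrical-arc mesh with MeshDescriptor, where the arc width is a parameter that can be regenerated when the user adjusts it, and applying a VideoMaterial so the video follows the curve. The proportions and segment count are made up, and I am not sure this is the intended approach:

import RealityKit
import AVFoundation

// Build a vertical strip of a cylinder: `width` is the arc length the user controls,
// `radius` sets how strongly the panel curves, `height` is the panel height.
func makeCurvedVideoPanel(width: Float, height: Float, radius: Float,
                          segments: Int, player: AVPlayer) throws -> ModelEntity {
    let arcAngle = width / radius                        // radians subtended by the panel
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for i in 0...segments {
        let t = Float(i) / Float(segments)
        let angle = (t - 0.5) * arcAngle                 // centered on the viewer
        let x = radius * sinf(angle)
        let z = radius * (1 - cosf(angle))               // curves away from the user
        positions.append(SIMD3(x, -height / 2, z))       // bottom edge
        positions.append(SIMD3(x,  height / 2, z))       // top edge
        // Even u spacing along the arc is meant to keep the video from stretching.
        uvs.append(SIMD2(t, 0))
        uvs.append(SIMD2(t, 1))
    }
    for i in 0..<segments {
        let base = UInt32(i * 2)
        indices.append(contentsOf: [base, base + 2, base + 1,
                                    base + 1, base + 2, base + 3])
    }

    var descriptor = MeshDescriptor(name: "curvedPanel")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.textureCoordinates = MeshBuffers.TextureCoordinates(uvs)
    descriptor.primitives = .triangles(indices)

    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh, materials: [VideoMaterial(avPlayer: player)])
}

I would appreciate confirmation that regenerating the mesh when the width changes (or doing this in a shader instead) is the right direction, and whether evenly spaced texture coordinates are enough to avoid distortion.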
I’m trying to use the Vision framework in a Swift Playground to perform face detection on an image. The following code works perfectly when I run it in a regular Xcode project, but in an App Playground, I get the error:
Thread 12: EXC_BREAKPOINT (code=1, subcode=0x10321c2a8)
Here's the code:
import SwiftUI
import Vision

struct ContentView: View {
    var body: some View {
        VStack {
            Text("Face Detection")
                .font(.largeTitle)
                .padding()

            Image("me")
                .resizable()
                .aspectRatio(contentMode: .fit)
                .onAppear {
                    detectFace()
                }
        }
    }

    func detectFace() {
        guard let cgImage = UIImage(named: "me")?.cgImage else { return }

        let request = VNDetectFaceRectanglesRequest { request, error in
            if let results = request.results as? [VNFaceObservation] {
                print("Detected \(results.count) face(s).")
                for face in results {
                    print("Bounding Box: \(face.boundingBox)")
                }
            } else {
                print("No faces detected.")
            }
        }

        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        do {
            try handler.perform([request]) // This line causes the error.
        } catch {
            print("Failed to perform Vision request: \(error)")
        }
    }
}
The error occurs on this line:
try handler.perform([request])
Details:
This code runs fine in a normal Xcode project (.xcodeproj).
I'm using an App Playground instead (.swiftpm).
The image is being included in the .xcassets folder.
Is there any way I can mitigate this issue? Please do not recommend switching to .xcodeproj, as I am making a submission for Apple's Swift Student Challenge, and they require that I use .swiftpm.
Hello All,
We're building a scene now, kind of like a time-travel door: when the user selects the scene, they pass through the door into the new scene, and the transition in the middle needs to feel natural. It would be even better if the user could walk through it inside an immersive space...
There is very little information available right now. How can I start building this? Is there any material I can refer to?
thanks
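The only mechanism I have found so far is opening an immersive space from the window scene; a minimal sketch is below (the "TimeTravelDoor" identifier and content are placeholders), but the door/portal transition itself is the part I do not know how to approach:

import SwiftUI
import RealityKit

struct DoorButtonView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Step through the door") {
            Task {
                // Opens the immersive scene declared below; the visual door/portal
                // transition would have to be built inside that RealityView.
                await openImmersiveSpace(id: "TimeTravelDoor")
            }
        }
    }
}

// Declared in the App body alongside the WindowGroup.
struct TimeTravelDoorSpace: Scene {
    var body: some Scene {
        ImmersiveSpace(id: "TimeTravelDoor") {
            RealityView { content in
                // Placeholder: load / build the destination scene here.
            }
        }
    }
}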
After I play audio on an entity, the sound is very low and I want to adjust its volume, but I can't find an API for this. What should I do?
if let audio = audioResources {
    entity.playAudio(audio)
}
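One possible approach, assuming RealityKit's audio API here: playAudio(_:) returns an AudioPlaybackController, and its gain property (in decibels) can be used to adjust the playback level. A sketch:

if let audio = audioResources {
    // Keep the controller returned by playAudio(_:) and adjust its gain (in dB).
    let controller = entity.playAudio(audio)
    controller.gain = -10   // quieter than the source; 0 leaves it unchanged, positive values are louder
}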