Did something change on face detection / Vision Framework on iOS 15?
Using VNDetectFaceLandmarksRequest and reading the VNFaceLandmarkRegion2D to detect eyes is not working on iOS 15 the way it did before. I am running the exact same code on an iOS 14 device and an iOS 15 device, and the coordinates are different, as seen in the screenshot.
Any ideas?
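For context, this is roughly the code I'm running on both devices (a simplified sketch; the image and the prints are placeholders):
import Vision
import UIKit

// Simplified sketch of the detection path; the image name and the logging are placeholders.
func detectEyes(in uiImage: UIImage) {
    guard let cgImage = uiImage.cgImage else { return }
    let request = VNDetectFaceLandmarksRequest { request, error in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // Landmark points come back normalized to the face bounding box;
            // pointsInImage(imageSize:) converts them to image coordinates.
            if let leftEye = face.landmarks?.leftEye {
                let points = leftEye.pointsInImage(imageSize: CGSize(width: cgImage.width, height: cgImage.height))
                print("left eye:", points)
            }
        }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}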
                    
                  
                Vision
Apply computer vision algorithms to perform a variety of tasks on input images and video using Vision.
Posts under Vision tag
            
              
                    
Currently, the Apple ID on my actual Vision Pro device is different from the one signed in to Xcode on my Mac mini. I added the Vision Pro's Apple ID to Xcode, but I still can't build to my Vision Pro. Should I sign out of the existing Xcode account and sign in with the same Apple ID as the Vision Pro? However, the Apple ID created for the Vision Pro does not have an Apple Developer membership, so there is no certificate that lets the app run on the actual device. How can I use the Apple ID that has my Apple Developer membership to run the Xcode project on the Vision Pro that is signed in with a different ID? This is my first time doing this.
                    
                  
                
                    
                      The new Mac virtual display feature on visionOS 2 offers a curved/panoramic window. I was wondering if this is simply a property that can be applied to a window, or if it involves an immersive mode or SceneKit/RealityKit?
                    
                  
                
                    
I'm playing with the new Vision API for iOS 18, specifically with the new CalculateImageAestheticsScoresRequest API.
When I try to perform the image observation request I get this error:
internalError("Error Domain=NSOSStatusErrorDomain Code=-1 \"Failed to create espresso context.\" UserInfo={NSLocalizedDescription=Failed to create espresso context.}")
The code is pretty straightforward:
if let image = image {
    let request = CalculateImageAestheticsScoresRequest()
    
    Task {
        do {
            let cgImg = image.cgImage!
            let observations = try await request.perform(on: cgImg)
            let description = observations.description
            let score = observations.overallScore
            print(description)
            print(score)
        } catch {
            print(error)
        }
    }
}
I'm running it on an M2 Mac using the simulator.
Is it a bug? What's wrong?
                    
                  
                
                    
                      Hey everyone,
I've been updating my code to take advantage of the new Vision API for text recognition in macOS 15. I'm noticing some very odd behavior, though: in general, the new Vision API seems to consistently produce worse results than the old API. For reference, here is how I'm setting up my request.
var request = RecognizeTextRequest()
request.recognitionLevel = getOCRMode()  // generally accurate
request.usesLanguageCorrection = !disableLanguageCorrection  // generally true
request.recognitionLanguages = language.split(separator: ",").map { Locale.Language(identifier: String($0)) }  // generally 'en'
let observations = try? await request.perform(on: image) as [RecognizedTextObservation]
Then I process the results and just take the top candidate, which, as mentioned above, is typically of worse quality than the result of the same request formed with the old API.
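For completeness, this is roughly how I pull the top candidate (a simplified sketch, assuming topCandidates carries over from the old API):
// Simplified sketch of how I extract the top candidate from each observation.
var recognizedLines: [String] = []
for observation in observations ?? [] {
    if let best = observation.topCandidates(1).first {
        recognizedLines.append(best.string)
    }
}
print(recognizedLines.joined(separator: "\n"))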
Am I doing something wrong here?
                    
                  
                
                    
I'm using a club to hit a ball and let it roll on the turf in RealityKit, but right now the ball only slides and does not roll.
I added collision on the turf (static), the club (kinematic), and the ball (dynamic), and set some parameters: radius and mass.
From these parameters I calculate the linear damping and inertia; I also use the time between frames and the club position to calculate the speed. The code looks like this:
                let radius: Float = 0.025
                let mass: Float = 0.04593 // mass, in kg
                var inertia = 2/5 * mass * pow(radius, 2)
                let currentPosition = entity.position(relativeTo: nil)
                let distance = distance(currentPosition, rgfc.lastPosition)
                let deltaTime = Float(context.deltaTime)
                let speed = distance / deltaTime
                
                let C_d: Float = 0.47 // drag coefficient
                let linearDamping = 0.5 * 1.2 * pow(speed, 2) * .pi * pow(radius, 2) * C_d // linear damping (1.2 is the air density)
                entity.components[PhysicsBodyComponent.self]?.massProperties.inertia = SIMD3<Float>(inertia, inertia, inertia)
                entity.components[PhysicsBodyComponent.self]?.linearDamping = linearDamping
// force
                let acceleration = speed / deltaTime
                let forceDirection = normalize(currentPosition - rgfc.lastPosition)
                
                let forceMultiplier: Float = 1.0
                let appliedForce = forceDirection * mass * acceleration * forceMultiplier
                entityCollidedWith.addForce(appliedForce, at: rgfc.hitPosition, relativeTo: nil)
                
I also tried applyImpulse instead of addForce, like:
                let linearImpulse = forceDirection * speed * forceMultiplier * mass
No matter how I adjust the friction (static and dynamic) and restitution, and whether I use addForce or applyImpulse, the ball only slides. How can I solve this problem?
                    
                  
                
                    
                      Hi everyone,
I'm working on integrating object recognition from live video feeds into my existing app by following Apple's sample code. My original project captures video and records it successfully. However, after integrating the Vision-based object detection components (VNCoreMLRequest), no detections occur, and the callback for the request is never triggered.
To debug this issue, I’ve added the following functionality:
Set up AVCaptureVideoDataOutput for processing video frames.
Created a VNCoreMLRequest using my Core ML model.
The video recording functionality works as expected, but no object detection happens. I’d like to know:
How can I debug this further? Which key debug points or logs could help identify where the issue lies?
Have I missed any key configurations? Below is a diff of the modifications I’ve made to my project for the new feature.
Diff of Changes:
(Attach the diff provided above)
Specific Observations:
The captureOutput method is invoked correctly, but there is no output or error from the Vision request callback.
Print statements in my setup function setForVideoClassify() show that the setup executes without errors.
Questions:
Could this be due to issues with my Core ML model compatibility or configuration?
Is the VNCoreMLRequest setup incorrect, or do I need to ensure specific image formats for processing?
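For reference, this is the minimal shape of the wiring I'm aiming for (a simplified sketch, not the actual diff; MyModel, the orientation, and the prints are placeholders):
import AVFoundation
import CoreML
import Vision

// Simplified sketch; MyModel stands in for my generated Core ML model class,
// and the capture session / delegate hookup is assumed to exist elsewhere.
final class VideoClassifier: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var coreMLRequest: VNCoreMLRequest?

    func setForVideoClassify() {
        guard let mlModel = try? MyModel(configuration: MLModelConfiguration()).model,
              let visionModel = try? VNCoreMLModel(for: mlModel) else { return }
        coreMLRequest = VNCoreMLRequest(model: visionModel) { request, error in
            if let error { print("Vision error: \(error)"); return }
            print("observations: \(request.results?.count ?? 0)")
        }
        coreMLRequest?.imageCropAndScaleOption = .scaleFill
    }

    // Called for every frame once this object is the video data output's sample buffer delegate.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let request = coreMLRequest,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
        do {
            try handler.perform([request])
        } catch {
            print("perform failed: \(error)")
        }
    }
}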
Platform:
Xcode 16.1, iOS 18.1, Swift 5, SwiftUI, iPhone 11,
Darwin MacBook-Pro.local 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64 x86_64
Any guidance or advice is appreciated! Thanks in advance.
                    
                  
                
                    
What projection method does the immersive space use: equirectangular (ERP), fisheye, or cubemap?
We want to achieve the same effect as Apple's immersive videos.
                    
                  
                
                    
                      Hi,
I'm trying to analyze images in my Photos library with the following code:
func analyzeImages(_ inputIDs: [String]) {
    let manager = PHImageManager.default()
    let option = PHImageRequestOptions()
    option.isSynchronous = true
    option.isNetworkAccessAllowed = true
    option.resizeMode = .none
    option.deliveryMode = .highQualityFormat
    let concurrentTasks = 1
    let clock = ContinuousClock()
    let duration = clock.measure {
        let group = DispatchGroup()
        let sema = DispatchSemaphore(value: concurrentTasks)
        for entry in inputIDs {
            if let asset = PHAsset.fetchAssets(withLocalIdentifiers: [entry], options: nil).firstObject {
                print("analyzing asset: \(entry)")
                group.enter()
                sema.wait()
                manager.requestImage(for: asset, targetSize: PHImageManagerMaximumSize, contentMode: .aspectFit, options: option) { (result, info) in
                    if let result = result {
                        Task {
                            print("retrieved asset: \(entry)")
                            let aestheticsRequest = CalculateImageAestheticsScoresRequest()
                            let fingerprintRequest = GenerateImageFeaturePrintRequest()
                            let inputImage = result.cgImage!
                            let handler = ImageRequestHandler(inputImage)
                            let (aesthetics, fingerprint) = try await handler.perform(aestheticsRequest, fingerprintRequest)
                            // save results
                            print("finished asset: \(entry)")
                            sema.signal()
                            group.leave()
                        }
                    } else {
                        group.leave()
                    }
                }
            }
        }
        group.wait()
    }
    print("analyzeImages: Duration \(duration)")
}
When running this code, only two requests are being processed simultaneously (due to the semaphore)... However, if I call the function with a large list of images (>100), memory usage balloons to over 1.6 GB and the app crashes. If I call it with a smaller number of images, the loop completes and the memory is freed.
When I use Instruments to look for memory leaks, it indicates that no memory leaks are found, but there are 150+ VM: IOSurface allocations by CMPhoto, CoreVideo, and CoreGraphics at 35 MB each. Shouldn't each surface be released when the task is complete?
                    
                  
                
                    
                      I'm trying to set up Facebook AI's "Segment Anything" MLModel to compare its performance and efficacy on-device against the Vision library's Foreground Instance Mask Request.
The Vision request accepts any reasonably sized image for processing, and then has a method to produce an output at the same resolution as the input image. Conversely, the MLModel for Segment Anything accepts a 1024x1024 image for inference and produces a 1024x1024 output.
What is the best way to work with non-square images, such as 4:3 camera photos? I can basically think of 3 methods for accomplishing this:
Scale the image to 1024x1024, ignoring aspect ratio, then inversely scale the output back to the original size. However, I have a big concern that squashing the content will result in poor inference results.
Scale the image, preserving its aspect ratio so its minimum dimension is 1024, then run the model multiple times on a sliding 1024x1024 window and aggregate the results. My main concern here is the complexity of de-duping the output, since each run could produce different results based on how objects are cropped.
Fit the image within 1024x1024 and pad with black pixels to make a square (a rough sketch of this is below). I'm not sure if the border will muck up the inference.
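Here's a rough sketch of the letterboxing in option 3, assuming CoreGraphics for the padding (the helper name and return shape are just illustrative):
import CoreGraphics

// Illustrative helper: letterbox an image into a 1024x1024 square (option 3),
// returning the padded image plus the rect the original occupies so the output
// mask can be cropped and scaled back afterwards.
func letterbox(_ image: CGImage, to side: Int = 1024) -> (padded: CGImage?, contentRect: CGRect) {
    let scale = CGFloat(side) / CGFloat(max(image.width, image.height))
    let width = CGFloat(image.width) * scale
    let height = CGFloat(image.height) * scale
    let contentRect = CGRect(x: (CGFloat(side) - width) / 2,
                             y: (CGFloat(side) - height) / 2,
                             width: width,
                             height: height)
    let context = CGContext(data: nil, width: side, height: side,
                            bitsPerComponent: 8, bytesPerRow: 0,
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    context?.setFillColor(CGColor(red: 0, green: 0, blue: 0, alpha: 1)) // black padding
    context?.fill(CGRect(x: 0, y: 0, width: side, height: side))
    context?.draw(image, in: contentRect)
    return (context?.makeImage(), contentRect)
}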
Anyway, this seems like it must be a well-solved problem in ML, but I'm having difficulty finding an authoritative best practice.
                    
                  
                
                    
                      Hi,
I'm working with the Vision framework to detect barcodes. I tested both EAN-13 and Data Matrix detection, and both work fine except for the QuadrilateralProviding values in the returned BarcodeObservation. The topLeft, topRight, bottomRight, and bottomLeft coordinates are rotated 90° counterclockwise (the physical bottom left of the Data Matrix, the corner of the "L", is returned as the topLeft point of the observation). The same behaviour happens with the EAN-13 barcode.
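For reference, this is roughly how I read the corners (a simplified sketch; cgImage is a placeholder for the frame I'm analyzing):
// Simplified sketch of how I read the corner points from the observations.
let request = DetectBarcodesRequest()
let observations = try await request.perform(on: cgImage)
for barcode in observations {
    print(barcode.topLeft, barcode.topRight, barcode.bottomRight, barcode.bottomLeft)
}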
Has anyone else experienced the same issue with orientation? Is this normal behaviour, or should we expect a fix in upcoming releases of the Vision framework?
                    
                  
                
                    
                      Hi everyone,
I'm working on an iOS app that uses VisionKit and I'm exploring the .visualLookUp feature. Specifically, I want to extract the detailed information that Visual Look Up provides after identifying an object in an image (e.g., if the object is a flower, retrieve its name; if it’s a clothing tag, get the tag's content).
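So far I only have the basic analysis step wired up, roughly like this (a simplified sketch; extracting the detailed Look Up result is the part I'm missing):
import UIKit
import VisionKit

// Simplified sketch: run the analyzer with Visual Look Up enabled and attach the
// result to an interaction on an image view.
@MainActor
func analyze(_ image: UIImage, in imageView: UIImageView) async throws {
    let analyzer = ImageAnalyzer()
    let configuration = ImageAnalyzer.Configuration([.visualLookUp])
    let analysis = try await analyzer.analyze(image, configuration: configuration)

    let interaction = ImageAnalysisInteraction()
    interaction.preferredInteractionTypes = .visualLookUp
    interaction.analysis = analysis
    imageView.addInteraction(interaction)
}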
                    
                  
                
                    
When I try to open an Immersive Space, I get an error like the one below:
HALC_ProxyIOContext::IOWorkLoop: skipping cycle due to overload
Any idea how to solve it?
                    
                  
                
                    
                      Hi all,
I am developing an app that scans barcodes using VisionKit, but I am facing some difficulties.
The accuracy is not where I hoped it would be. Changing the "qualityLevel" parameter from balanced to accurate made the barcode reading slightly better, but it is still misreading some cases. I previously implemented the same barcode scanning app with AVFoundation, and that had much better accuracy. I tested it out, and barcodes that were read correctly with AVFoundation were read incorrectly with VisionKit. Is there any way to improve the accuracy of the barcode reading in VisionKit? Or is this something that is built in and the developer cannot change? Either way, any ideas on how to improve reading accuracy would help.
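For reference, this is roughly how I'm creating the scanner (a simplified sketch; the symbology list is just an example):
import VisionKit

// Simplified sketch of my scanner setup; the symbologies listed are just an example.
let scanner = DataScannerViewController(
    recognizedDataTypes: [.barcode(symbologies: [.ean13, .dataMatrix])],
    qualityLevel: .accurate,
    recognizesMultipleItems: false,
    isHighlightingEnabled: true
)
// scanner.delegate = self
// try scanner.startScanning()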
Thanks in advance!
                    
                  
                
                    
If I set UIApplicationPreferredDefaultSceneSessionRole to UISceneSessionRoleImmersiveSpaceApplication, my Immersive Space for the image works fine. But when I use UIWindowSceneSessionRoleApplication and try to open the Immersive Space from a particular sub-screen, the image is not shown in the Immersive Space (the Immersive Space does not open).
Does anyone have an idea what the issue is?
<key>UIApplicationSceneManifest</key>
<dict>
	<key>UIApplicationPreferredDefaultSceneSessionRole</key>
	<string>UIWindowSceneSessionRoleApplication</string>
	<key>UIApplicationSupportsMultipleScenes</key>
	<true/>
	<key>UISceneConfigurations</key>
	<dict>
		<key>UISceneSessionRoleImmersiveSpaceApplication</key>
		<array>
			<dict>
				<key>UISceneInitialImmersionStyle</key>
				<string>UIImmersionStyleFull</string>
			</dict>
		</array>
	</dict>
</dict>
My Info.plist values are as above.
                    
                  
                
                    
                      Hello,
I've been dealing with a puzzling issue for some time now, and I’m hoping someone here might have insights or suggestions.
The Problem:
We’re observing an occasional crash in our app that seems to originate from the Vision framework.
Frequency: It happens randomly, after many successful executions of the same code. It's hard to tell how long the app had been running, but in some cases the app could run for about a month without any issues.
Devices: The issue doesn't seem device-dependent (we’ve seen it on various iPad models).
OS Versions: The crashes started occurring with iOS 18.0.1 and are still present in 18.1 and 18.1.1.
What I suspected: The crash logs point to a potential data race within the Vision framework.
The relevant section of the code where the crash happens:
guard let cgImage = image.cgImage else {
    throw ...
}
let request = VNCoreMLRequest(model: visionModel)
try VNImageRequestHandler(cgImage: cgImage).perform([request]) // <- the line causing the crash
Since the code is rather simple, I'm not sure what else there could be missing here.
The images sent here are uniform (fixed size).
The model is loaded and working; the crash occurs randomly after a period of time, and the call has worked correctly many times before. Also, the model variable is not an optional.
Here is the crash log:
libobjc.A	objc_exception_throw
CoreFoundation	-[NSMutableArray removeObjectsAtIndexes:]
Vision	-[VNWeakTypeWrapperCollection _enumerateObjectsDroppingWeakZeroedObjects:usingBlock:]
Vision	-[VNWeakTypeWrapperCollection addObject:droppingWeakZeroedObjects:]
Vision	-[VNSession initWithCachingBehavior:]
Vision	-[VNCoreMLTransformer initWithOptions:model:error:]
Vision	-[VNCoreMLRequest internalPerformRevision:inContext:error:]
Vision	-[VNRequest performInContext:error:]
Vision	-[VNRequestPerformer _performOrderedRequests:inContext:error:]
Vision	-[VNRequestPerformer _performRequests:onBehalfOfRequest:inContext:error:]
Vision	-[VNImageRequestHandler performRequests:gatheredForensics:error:]
OurApp	ModelWrapper.perform
I'm a bit lost at this point; I've tried everything I could imagine so far.
I've tried putting a symbolic breakpoint on removeObjectsAtIndexes: to check whether some library we use (e.g. a crash reporter) did an implementation swap. There was none, and if anything did method swizzling, I'd expect it to show in the stack trace before the original code is called. I also peeked into the preceding functions and noticed a lock used in one of the Vision methods, so as I understand it, a data race in this code shouldn't be possible at all. I've also put breakpoints on the NSLock variants to check for swizzling or an override with a category that might mess up the locking; again, nothing was there.
There is also another model that is running on a separate queue, but after seeing the line with the locking in the debugger, it doesn't seem to me like this could cause a problem, at least not in this specific spot.
Is there something I'm missing here, or something I'm doing wrong?
Thanks in advance for your help!
                    
                  
                
                    
After I play the audio for the entity, the sound is very low, and I want to adjust the volume, but I can't find an API for it. What should I do?
if let audio = audioResources {
    entity.playAudio(audio)
}
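The only thing I can think of is the playback controller that playAudio returns, assuming its gain property is the right knob (a sketch; the value is arbitrary):
// Sketch: playAudio(_:) returns an AudioPlaybackController; assuming its gain
// (in decibels, with 0 as the default) controls the playback volume:
if let audio = audioResources {
    let controller = entity.playAudio(audio)
    controller.gain = 10 // relative gain in decibels; just a guess at a louder value
}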
                    
                  
                
                    
                      Hello All,
We're going to build a scene now, kind of like a time-travel door. When the user selects the scene, they pass through the door into the current scene. The transition in between needs to feel natural. It would be even better if the user could walk through it in an immersive space...
There is very little information on this right now. How can I start? Is there any material I can refer to?
thanks
                    
                  
                
                    
                      I’m trying to use the Vision framework in a Swift Playground to perform face detection on an image. The following code works perfectly when I run it in a regular Xcode project, but in an App Playground, I get the error:
Thread 12: EXC_BREAKPOINT (code=1, subcode=0x10321c2a8)
Here's the code:
import SwiftUI
import Vision
struct ContentView: View {
    var body: some View {
        VStack {
            Text("Face Detection")
                .font(.largeTitle)
                .padding()
            
            Image("me")
                .resizable()
                .aspectRatio(contentMode: .fit)
                .onAppear {
                    detectFace()
                }
        }
    }
    
    func detectFace() {
        guard let cgImage = UIImage(named: "me")?.cgImage else { return }
        let request = VNDetectFaceRectanglesRequest { request, error in
            if let results = request.results as? [VNFaceObservation] {
                print("Detected \(results.count) face(s).")
                for face in results {
                    print("Bounding Box: \(face.boundingBox)")
                }
            } else {
                print("No faces detected.")
            }
        }
        
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        do {
            try handler.perform([request]) // This line causes the error.
        } catch {
            print("Failed to perform Vision request: \(error)")
        }
    }
}
The error occurs on this line:
try handler.perform([request])
Details:
This code runs fine in a normal Xcode project (.xcodeproj).
I'm using an App Playground instead (.swiftpm).
The image is included in the .xcassets folder.
Is there any way I can mitigate this issue? Please do not recommend switching to .xcodeproj, as I am making a submission for Apple's Swift Student Challenge, and they require that I use .swiftpm.
                    
                  
                
                    
                      Dear Apple Engineers,
I am working on a project in visionOS and need to implement a curved surface effect for video playback, where the width of the surface can be dynamically adjusted. Specifically, I want the video to be displayed on a curved surface (similar to a scroll unfolding), and the user should be able to adjust the width of this surface.
I have the following specific questions:
How can I implement a curved surface for video playback and ensure the video content is not stretched or distorted on the surface?
How can I create a dynamic curved surface (such as a bending plane) in RealityKit or visionOS, where the width can be adjusted by the user?
Is it possible to achieve more complex curved surface effects (such as scroll unfolding or bending) using Shaders or other techniques?
Thank you very much for your help!