VisionKit


Scan documents with the camera on iPhone and iPad devices using VisionKit.

VisionKit Documentation

Posts under VisionKit tag

65 Posts
Post not yet marked as solved
0 Replies
442 Views
I heard the Vision Pro is expected to be available in the US market very soon, but it will be delayed for other markets. Any idea whether Apple is still accepting applications for the Vision Pro Developer Kit loan program?
Posted by divya_ms. Last updated.
Post not yet marked as solved
0 Replies
392 Views
In our app, we need to use the VisionKit framework to lift the subject from an image and crop it. Here is the code:

if #available(iOS 17.0, *) {
    let analyzer = ImageAnalyzer()
    let analysis = try? await analyzer.analyze(image, configuration: self.visionKitConfiguration)
    let interaction = ImageAnalysisInteraction()
    interaction.analysis = analysis
    interaction.preferredInteractionTypes = [.automatic]
    guard let subject = await interaction.subjects.first else { return image }
    let s = await interaction.subjects
    print(s.first?.bounds)
    guard let cropped = try? await subject.image else { return image }
    return cropped
}

But s.first?.bounds always returns a CGRect with all-zero values. Is there any other way to get the position of the cropped subject? I need the position in the original image from which the subject was cropped. Can anyone help?
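One possible direction, sketched under assumptions rather than as a confirmed fix: the Vision framework's VNGenerateForegroundInstanceMaskRequest (iOS 17) produces a subject mask from which a bounding box in image pixel coordinates can be derived by scanning the mask. The OneComponent32Float mask format and the 0.5 threshold below are assumptions to verify.

import Vision
import CoreVideo
import CoreGraphics

// Sketch: derive the subject's bounding box from Vision's
// foreground-instance mask (iOS 17+).
@available(iOS 17.0, *)
func subjectBoundingBox(in cgImage: CGImage) throws -> CGRect? {
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    guard let observation = request.results?.first else { return nil }

    // Mask scaled to the original image dimensions.
    let mask = try observation.generateScaledMaskForImage(
        forInstances: observation.allInstances, from: handler)

    CVPixelBufferLockBaseAddress(mask, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(mask, .readOnly) }

    let width = CVPixelBufferGetWidth(mask)
    let height = CVPixelBufferGetHeight(mask)
    let rowBytes = CVPixelBufferGetBytesPerRow(mask)
    guard let base = CVPixelBufferGetBaseAddress(mask) else { return nil }

    // Assumption: one 32-bit float per pixel; scan for mask coverage.
    var minX = width, minY = height, maxX = -1, maxY = -1
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float.self)
        for x in 0..<width where row[x] > 0.5 {
            minX = min(minX, x); maxX = max(maxX, x)
            minY = min(minY, y); maxY = max(maxY, y)
        }
    }
    guard maxX >= 0 else { return nil }
    return CGRect(x: minX, y: minY, width: maxX - minX + 1, height: maxY - minY + 1)
}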
Posted by utshas. Last updated.
Post not yet marked as solved
1 Reply
516 Views
Environment:
Intel Core i7, macOS 14.0
Xcode 15 Beta 8
Simulator: visionOS 1.0 beta 3 (21N5233e)
Simulator: iOS 17.0.1, iOS 17.0 beta 8

Steps: In Xcode, create a new Vision demo project; it can't build:

[macosx] error: Failed to find newest available Simulator runtime
Command RealityAssetsCompile failed with a nonzero exit code
Posted by arderbud. Last updated.
Post not yet marked as solved
0 Replies
817 Views
Hi guys, has any individual developer received the Vision Pro dev kit, or is it just aimed at big companies? Basically I would like to start with one or two of my apps that I have already removed from the store, just to get familiar with the visionOS platform and gain knowledge and skills on a small but real project. After that I would like to use the dev kit on another project. I work on contract for a multinational communication company on a pilot project in a small country, and extending that project to visionOS might be a very interesting introduction of this new platform and could excite users of their services. However, I cannot quite reveal the details to Apple for reasons of confidentiality. After completing that contract (or during it, if I manage) I would like to start working on a great idea I have for Vision Pro (as many of you do). Is it worth applying for the dev kit as an individual dev? I have read some posts where people were rejected. Is it better to start in the simulator and just wait for the actual hardware to show up in the store? I would prefer to just get the device, rather than work with a device that I may need to return in the middle of an unfinished project. Any info on when pre-orders might be possible? Any idea what Mac specs are recommended for developing for visionOS, especially for 3D scenes? I just got a MacBook Pro M3 Max with 96 GB RAM, and I'm wondering whether I should have maxed out the config. Is anybody using that config with the Vision Pro dev kit? Thanks.
Posted. Last updated.
Post not yet marked as solved
0 Replies
351 Views
I am looking for an IP camera that is not very expensive. The key point is that I should be able to convert its frames to CMSampleBuffer, because I would like to use the images for some basic analysis using Vision. So far I could not find any IP camera manufacturer that offers an SDK for Swift and iOS for this kind of work.
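As a side note, Vision does not strictly require a CMSampleBuffer: a decoded frame as a CVPixelBuffer (or CGImage) can be handed to VNImageRequestHandler directly. A minimal sketch, assuming the camera's frames are already decoded to a CVPixelBuffer:

import Vision
import CoreVideo

// Sketch: run a Vision request directly on a decoded frame,
// without wrapping it in a CMSampleBuffer first.
func detectRectangles(in pixelBuffer: CVPixelBuffer) throws -> [VNRectangleObservation] {
    let request = VNDetectRectanglesRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
    return request.results ?? []
}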
Posted by ucelen. Last updated.
Post not yet marked as solved
0 Replies
434 Views
I use DataScannerViewController for barcode scanning and text recognition, and then get the AVCaptureDevice to turn the torch on/off, but DataScannerViewController then stops scanning. DataScannerViewController has no API for obtaining its AVCaptureDevice to control the torch. Expected: to be able to use AVCaptureDevice to turn the torch on/off while DataScannerViewController keeps scanning.
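For reference, the usual AVCaptureDevice torch toggle is sketched below; whether it can coexist with DataScannerViewController's internally managed capture session is not guaranteed (other threads report the scanner pausing when the torch is changed).

import AVFoundation

// Sketch of the standard torch toggle; not a confirmed workaround
// for DataScannerViewController.
func setTorch(_ on: Bool) {
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: .back),
          device.hasTorch else { return }
    do {
        try device.lockForConfiguration()
        device.torchMode = on ? .on : .off
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}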
Posted by RobinGao. Last updated.
Post not yet marked as solved
0 Replies
438 Views
I have a Live Text implementation in the following LiveTextImageView. However, after the view loads and the analyze code runs, none of the delegate methods fire when I interact with the live view. Selecting text does not fire the textSelectionDidChange method, nor does highlightSelectedItemsDidChange fire when the Live Text button in the bottom right is pressed. I tried a few different implementations, including an approach where the delegate was defined on a separate class. I am running this on an iPhone 12 Pro I recently updated to 17.0.3. My goal is to provide additional options to the user beyond the default Live Text overlay options, after identifying when they have finished selecting text.

//
// LiveTextImageView.swift
//

import UIKit
import SwiftUI
import VisionKit

class ImageAnalyzerWrapper {
    static let shared = ImageAnalyzer()
    private init() { }
}

struct LiveTextImageViewRepresentable: UIViewRepresentable {
    var image: UIImage

    func makeUIView(context: Context) -> LiveTextImageView {
        return LiveTextImageView(image: image)
    }

    func updateUIView(_ uiView: LiveTextImageView, context: Context) { }
}

class LiveTextImageView: UIImageView, ImageAnalysisInteractionDelegate, UIGestureRecognizerDelegate {
    var capturedSelectedText: String?
    let analyzer = ImageAnalyzerWrapper.shared
    let interaction = ImageAnalysisInteraction()

    init(image: UIImage) {
        super.init(frame: .zero)
        self.image = image

        let photoWrapper = PhotoWrapper(rawPhoto: image)
        let resizedPhoto = photoWrapper.viewportWidthCroppedPhoto(padding: 40)
        self.image = resizedPhoto
        self.contentMode = .scaleAspectFit

        self.addInteraction(interaction)
        interaction.preferredInteractionTypes = []
        interaction.analysis = nil
        analyzeImage()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func analyzeImage() {
        if let image = self.image {
            Task {
                let configuration = ImageAnalyzer.Configuration([.text])
                do {
                    let analysis = try await analyzer.analyze(image, configuration: configuration)
                    self.addInteraction(interaction)
                    interaction.delegate = self
                    interaction.analysis = analysis
                    interaction.preferredInteractionTypes = .textSelection
                } catch {
                    print("Error in live image handling")
                }
            }
        }
    }

    func interaction(
        _ interaction: ImageAnalysisInteraction,
        highlightSelectedItemsDidChange highlightSelectedItems: Bool) async {
        print("Highlighted items changed")
    }

    func interaction(_ interaction: ImageAnalysisInteraction,
                     shouldBeginAt point: CGPoint,
                     for interactionType: ImageAnalysisInteraction.InteractionTypes) async -> Bool {
        return interaction.hasInteractiveItem(at: point) || interaction.hasActiveTextSelection
    }

    func textSelectionDidChange(_ interaction: ImageAnalysisInteraction) async {
        print("Changed!")
        if #available(iOS 17.0, *) {
            capturedSelectedText = interaction.text
            print(capturedSelectedText ?? "")
        }
    }
}
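One detail worth ruling out, offered as a guess rather than a confirmed cause: UIImageView disables user interaction by default, so the ImageAnalysisInteraction may never receive the touches that drive its delegate callbacks. A minimal adjustment to the initializer above:

// UIImageView ignores touches by default; without this the
// interaction's gestures (and thus its delegate methods) may
// never fire. A guess to verify, not a confirmed fix.
init(image: UIImage) {
    super.init(frame: .zero)
    isUserInteractionEnabled = true
    // ... remaining setup unchanged ...
}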
Posted. Last updated.
Post marked as solved
1 Reply
468 Views
Hi everyone, I'm having a strange crash on app launch with iOS 16 when I have a reference to an iOS 17-only API in my code. Even if I wrap the code in #available, I still get the crash on launch, and the code isn't even called yet... just its existence causes the crash. Pretty strange, I thought? The framework is VisionKit, and the code that causes the crash is:

if #available(iOS 17, *) {
    // .imageSubject is iOS 17 only - but this causes
    // a crash on launch in iOS 16 even with the #available check
    interaction.preferredInteractionTypes = .imageSubject
}

The crash is:

Referenced from: <91ED5216-D66C-3649-91DA-B31C0B55DDA1> /private/var/containers/Bundle/Application/78FD9C93-5657-4FF5-85E7-A44B60717870/XXXXXX.app/XXXXXX
Expected in: <AF01C435-3C37-3C7C-84D9-9B5EA3A59F5C> /System/Library/Frameworks/VisionKit.framework/VisionKit

Any thoughts, anyone? I know .imageSubject is iOS 17 only, but the #available check should catch it, no? And why does it crash immediately on launch, when that code is not even called? Odd!
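A workaround sometimes suggested for this class of missing-symbol launch crash (offered as a sketch, not confirmed as the fix in this particular case) is to weak-link VisionKit so that symbols absent on the older OS do not abort the launch, for example via Other Linker Flags in the target's build settings:

OTHER_LDFLAGS = $(inherited) -Wl,-weak_framework,VisionKit

Building against the latest SDK may also change the behavior; treat both as things to try rather than guaranteed solutions.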
Posted. Last updated.
Post not yet marked as solved
11 Replies
4.6k Views
Hi team, we have an iOS app. Since July 15, 2022, our users have hit a kind of app crash caused by an invalid memory fetch. That is when Apple officially released the iOS 16 beta. After Sep 12, the crash count started to increase drastically; that is when Apple released iOS 16 officially. The crash backtrace is as follows:

Thread 14 Crashed:
0   libsystem_platform.dylib   0x00000001f8810930 _platform_memmove + 96
1   CoreGraphics               0x00000001adb64104 CGDataProviderCreateWithCopyOfData + 20
2   CoreGraphics               0x00000001adb4cdb4 CGBitmapContextCreateImage + 172
3   VisionKitCore              0x00000001ed813f10 -[VKCRemoveBackgroundResult _createCGImageFromBGRAPixelBuffer:cropRect:] + 348
4   VisionKitCore              0x00000001ed813cc0 -[VKCRemoveBackgroundResult createCGImage] + 156
5   VisionKitCore              0x00000001ed8ab6f8 __vk_cgImageRemoveBackgroundWithDownsizing_block_invoke + 64
6   VisionKitCore              0x00000001ed881474 __63-[VKCRemoveBackgroundRequestHandler performRequest:completion:]_block_invoke.5 + 436
7   MediaAnalysisServices      0x00000001eec58968 __92-[MADService performRequests:onPixelBuffer:withOrientation:andIdentifier:completionHandler:]_block_invoke.38 + 400
8   CoreFoundation             0x00000001abff0a14 __invoking___ + 148
9   CoreFoundation             0x00000001abf9cf2c -[NSInvocation invoke] + 428
10  Foundation                 0x00000001a6464d38 __NSXPCCONNECTION_IS_CALLING_OUT_TO_REPLY_BLOCK__ + 16
11  Foundation                 0x00000001a64362fc -[NSXPCConnection _decodeAndInvokeReplyBlockWithEvent:sequence:replyInfo:] + 520
12  Foundation                 0x00000001a6a10f44 __88-[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:]_block_invoke_5 + 188
13  libxpc.dylib               0x00000001f89053e4 _xpc_connection_reply_callout + 124
14  libxpc.dylib               0x00000001f88f8580 _xpc_connection_call_reply_async + 88
15  libdispatch.dylib          0x00000001b340205c _dispatch_client_callout3 + 20
16  libdispatch.dylib          0x00000001b341ff58 _dispatch_mach_msg_async_reply_invoke + 344
17  libdispatch.dylib          0x00000001b340956c _dispatch_lane_serial_drain + 376
18  libdispatch.dylib          0x00000001b340a214 _dispatch_lane_invoke + 436
19  libdispatch.dylib          0x00000001b3414e10 _dispatch_workloop_worker_thread + 652
20  libsystem_pthread.dylib    0x00000001f88a4df8 _pthread_wqthread + 288
21  libsystem_pthread.dylib    0x00000001f88a4b98 start_wqthread + 8

Last but not least, the users who hit this crash are on iOS 16+. We think this crash is related to the iOS 16 SDK. We would appreciate any clues on how to fix this kind of crash.
Posted by feiyz. Last updated.
Post not yet marked as solved
0 Replies
347 Views
I notice that some WWDC sessions have a Code tab (in addition to Overview and Transcript), but session 10176 does not. I tried the code I could see in the video, but it's obviously not a complete project. It would help if the authors of the session 10176 video could add the code to the session.
Posted by DrLeach. Last updated.
Post not yet marked as solved
0 Replies
516 Views
var accessibilityComponent = AccessibilityComponent()
accessibilityComponent.isAccessibilityElement = true
accessibilityComponent.traits = [.button, .playsSound]
accessibilityComponent.label = "Cloud"
accessibilityComponent.value = "Grumpy"
cloud.components[AccessibilityComponent.self] = accessibilityComponent

// ...

var isHappy: Bool {
    didSet {
        cloudEntities[id].accessibilityValue = isHappy ? "Happy" : "Grumpy"
    }
}
Posted. Last updated.
Post marked as solved
6 Replies
2.5k Views
I just grabbed the portal code made available for testing and ran into this error when trying to run it in the Vision Pro simulator: Thread 1: Fatal error: SwiftUI Scene ImmersiveSpace requires a UISceneSessionRole of "UISceneSessionRoleImmersiveSpaceApplication" for key UIApplicationPreferredDefaultSceneSessionRole in the Application Scene Manifest.
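For reference, the error points at the app's Info.plist Application Scene Manifest; a sketch of the key it asks for (the surrounding manifest entries depend on the project) might look like:

<key>UIApplicationSceneManifest</key>
<dict>
    <key>UIApplicationPreferredDefaultSceneSessionRole</key>
    <string>UISceneSessionRoleImmersiveSpaceApplication</string>
</dict>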
Posted. Last updated.
Post not yet marked as solved
0 Replies
417 Views
Using VNDocumentCameraViewController, if the document is captured automatically, it cannot be obtained in func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan) {}. If the photo is taken manually, the data can be obtained. May I ask how to solve this?
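For comparison, a minimal baseline is sketched below, assuming the delegate is assigned before the controller is presented; it does not by itself explain the auto-capture difference.

import UIKit
import VisionKit

// Sketch: standard VNDocumentCameraViewController wiring.
final class ScannerHost: UIViewController, VNDocumentCameraViewControllerDelegate {
    func presentScanner() {
        let scanner = VNDocumentCameraViewController()
        scanner.delegate = self
        present(scanner, animated: true)
    }

    func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                      didFinishWith scan: VNDocumentCameraScan) {
        // Called when the user taps Save; each page is available here.
        for pageIndex in 0..<scan.pageCount {
            let pageImage = scan.imageOfPage(at: pageIndex)
            print("Scanned page \(pageIndex): \(pageImage.size)")
        }
        controller.dismiss(animated: true)
    }

    func documentCameraViewControllerDidCancel(_ controller: VNDocumentCameraViewController) {
        controller.dismiss(animated: true)
    }
}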
Posted by vv12120. Last updated.
Post not yet marked as solved
0 Replies
578 Views
I am trying to scan credit cards, but I am having issues with embossed (pressed) numbers on the card, and cards with dark backgrounds and dark numbers are not scanning either. Please help me scan every card reliably. Thanks.
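The post does not say which API is in use; assuming Vision text recognition, the sketch below shows a configuration sometimes tried for low-contrast or embossed digits. The settings are suggestions, not a guaranteed fix.

import Vision
import CoreGraphics

// Sketch: accurate-level recognition with language correction off,
// so digit groups are not "corrected" into words.
func recognizeCardText(in cgImage: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = false
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results?.compactMap { $0.topCandidates(1).first?.string } ?? []
}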
Posted. Last updated.
Post not yet marked as solved
7 Replies
1.9k Views
Hi, do you know if it's possible to control the flashlight inside DataScannerViewController? I tried with AVCaptureDevice.DiscoverySession, but clearly when torchMode is on, DataScannerViewController freezes... Thanks, Luca
Posted. Last updated.
Post not yet marked as solved
4 Replies
1.8k Views
Hi, when using VNFeaturePrintObservation and then computing the distance between two images, the values it returns vary heavily. When two identical images (the same image file) are passed into the function below that I use to compare images, the distance does not return 0, while it is expected to, since the images are identical. Also, what is the upper limit of computeDistance? I am trying to find the percentage similarity between the two images. (Of course, this cannot be done unless the issue above is resolved.) The code that I have used is below:

func featureprintObservationForImage(image: UIImage) -> VNFeaturePrintObservation? {
    let requestHandler = VNImageRequestHandler(cgImage: image.cgImage!, options: [:])
    let request = VNGenerateImageFeaturePrintRequest()
    request.usesCPUOnly = true // Simulator testing
    do {
        try requestHandler.perform([request])
        return request.results?.first as? VNFeaturePrintObservation
    } catch {
        print("Vision Error: \(error)")
        return nil
    }
}

func compare(origImg: UIImage, drawnImg: UIImage) -> Float? {
    let oImgObservation = featureprintObservationForImage(image: origImg)
    let dImgObservation = featureprintObservationForImage(image: drawnImg)
    if let oImgObservation = oImgObservation {
        if let dImgObservation = dImgObservation {
            var distance: Float = -1
            do {
                try oImgObservation.computeDistance(&distance, to: dImgObservation)
            } catch {
                fatalError("Failed to Compute Distance")
            }
            if distance == -1 {
                return nil
            } else {
                return distance
            }
        } else {
            print("Drawn Image Observation found Nil")
        }
    } else {
        print("Original Image Observation found Nil")
    }
    return nil
}

Thanks for all the help!
Posted by chewethan. Last updated.
Post not yet marked as solved
2 Replies
838 Views
Hello, I'm building an iOS app and I'm trying to find a way to programmatically extract a person from their identity picture (leaving the background behind). I'm watching the WWDC "Lift subjects from images in your app" video (a really cool feature) and I'm wondering whether this would be possible programmatically, without any human interaction. Thank you.
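One programmatic route, sketched under the assumption that iOS 17 and the Vision framework (rather than VisionKit's interaction API) are acceptable: generate a foreground instance mask and composite the masked subject with no user interaction.

import Vision
import CoreVideo
import CoreGraphics

// Sketch: programmatic subject lifting via Vision (iOS 17+),
// returning the foreground composited over transparency.
@available(iOS 17.0, *)
func liftSubject(from cgImage: CGImage) throws -> CVPixelBuffer? {
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    guard let observation = request.results?.first else { return nil }
    // Masked image cropped to the subject's extent; no user interaction needed.
    return try observation.generateMaskedImage(
        ofInstances: observation.allInstances,
        from: handler,
        croppedToInstancesExtent: true)
}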
Posted by Gohoro. Last updated.