VisionKit


Scan documents with the camera on iPhone and iPad devices using VisionKit.

VisionKit Documentation

Posts under VisionKit tag

65 Posts
Post not yet marked as solved
1 Reply
811 Views
Is there a framework that allows for classic image processing operations in real time on the imagery from the front-facing cameras before it is displayed on the OLED screens? Things like spatial filtering, histogram equalization, and image warping. I saw the documentation for the Vision framework, but it seems to address high-level tasks, like object detection and recognition. Thank you!
Posted by
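Not an answer from the thread, but one common route for this kind of per-frame processing is Core Image, which offers built-in spatial filters and geometric warps. A minimal sketch, assuming each incoming frame is available as a CVPixelBuffer (the capture callback and rendering pipeline around it are your own):

import CoreImage
import CoreImage.CIFilterBuiltins

let ciContext = CIContext()

// Apply a simple spatial filter (Gaussian blur) to one incoming camera frame.
// The CVPixelBuffer is assumed to come from your own capture callback.
func processFrame(_ pixelBuffer: CVPixelBuffer) -> CIImage? {
    let input = CIImage(cvPixelBuffer: pixelBuffer)
    let blur = CIFilter.gaussianBlur()
    blur.inputImage = input
    blur.radius = 4
    // Crop back to the original extent, since blurring expands the image.
    return blur.outputImage?.cropped(to: input.extent)
}

Geometric filters such as CIFilter.perspectiveTransform() can cover basic warping; histogram equalization is not a single built-in filter, so it would likely need to be assembled from the histogram filters or a custom kernel.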
Post marked as solved
2 Replies
770 Views
Trying to use VNGeneratePersonSegmentationRequest. It seems to work, but the output mask isn't at the same resolution as the source image, so compositing the result with the source produces a bad result. Not the full code, but hopefully enough to see what I'm doing.

var imageRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
let imageRef = image.cgImage(forProposedRect: &imageRect, context: nil, hints: nil)!

let request = VNGeneratePersonSegmentationRequest()
let handler = VNImageRequestHandler(cgImage: imageRef)
do {
    try handler.perform([request])
    guard let result = request.results?.first else { return }
    // Is this the right way to do this?
    let output = result.pixelBuffer
    // This CIImage alpha mask is a different resolution than the source image,
    // so I don't know how to combine it with the source to cut out the foreground;
    // they don't line up, and the mask isn't even the right aspect ratio.
    let ciImage = CIImage(cvPixelBuffer: output)
    // ...
}
Posted by
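Not from the thread, but a common way to handle the resolution mismatch above is to request the higher-quality mask and scale it to the source extent before compositing. A minimal sketch under those assumptions (the helper name is made up):

import Vision
import CoreImage
import CoreImage.CIFilterBuiltins
import CoreVideo

// Hypothetical helper: scale the low-resolution mask up to the source size,
// then use it to cut the person out of the source image.
func maskedPerson(from source: CGImage) throws -> CIImage? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .accurate          // higher-resolution mask than .fast
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cgImage: source)
    try handler.perform([request])
    guard let maskBuffer = request.results?.first?.pixelBuffer else { return nil }

    let sourceImage = CIImage(cgImage: source)
    var maskImage = CIImage(cvPixelBuffer: maskBuffer)

    // Scale the mask so it matches the source extent before compositing.
    let scaleX = sourceImage.extent.width / maskImage.extent.width
    let scaleY = sourceImage.extent.height / maskImage.extent.height
    maskImage = maskImage.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

    let blend = CIFilter.blendWithMask()
    blend.inputImage = sourceImage
    blend.backgroundImage = CIImage.empty()
    blend.maskImage = maskImage
    return blend.outputImage
}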
Post not yet marked as solved
1 Reply
626 Views
How can I achieve full control over Vision Pro's display and effectively render a 2D graph plot on it? I would appreciate guidance on the necessary steps or code snippets. P.S. Per Apple's documentation, "For a more immersive experience, an app can open a dedicated Full Space where only that app's content will appear." This still does not fulfill the 'flat bounded 2D' requirement, as Spaces provide an unbounded 3D immersive view.
Posted by
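Not from the thread: on visionOS, a flat, bounded 2D surface is what a regular SwiftUI window provides, so one option is to draw the plot with Swift Charts inside a plain WindowGroup rather than a Space. A minimal sketch with made-up sample data:

import SwiftUI
import Charts

@main
struct GraphApp: App {
    // Hypothetical sample data for the plot.
    let samples: [Double] = (0..<100).map { Double($0 % 20) }

    var body: some Scene {
        // A plain WindowGroup is a flat, bounded 2D window on visionOS;
        // no ImmersiveSpace is opened.
        WindowGroup {
            Chart {
                ForEach(Array(samples.indices), id: \.self) { index in
                    LineMark(x: .value("Index", index),
                             y: .value("Value", samples[index]))
                }
            }
            .padding()
        }
    }
}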
Post not yet marked as solved
0 Replies
483 Views
First of all, this Vision API is amazing; the OCR is very accurate. I've been looking into multiprocessing with the Vision API. I have about 2 million PDFs I want to OCR, and I want to run multiple threads / parallel processes to OCR each one. I tried PyObjC, but it does not work so well. Any suggestions on tackling this problem?
Posted by
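Not from the thread, and in Swift rather than PyObjC: once each PDF page has been rasterized to a CGImage, Vision's text recognition can run in parallel across pages or files. A minimal sketch; rasterizing the PDFs and batching across 2 million files are assumed to happen elsewhere:

import Foundation
import Vision

// Hypothetical helper: OCR one page image and return its recognized lines.
func recognizeText(in cgImage: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try handler.perform([request])
    let observations = request.results ?? []
    return observations.compactMap { $0.topCandidates(1).first?.string }
}

// Process a batch of already-rasterized page images in parallel.
func ocrPages(_ pageImages: [CGImage]) -> [[String]] {
    var results = [[String]](repeating: [], count: pageImages.count)
    let lock = NSLock()
    DispatchQueue.concurrentPerform(iterations: pageImages.count) { index in
        let lines = (try? recognizeText(in: pageImages[index])) ?? []
        lock.lock()
        results[index] = lines
        lock.unlock()
    }
    return results
}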
Post not yet marked as solved
2 Replies
632 Views
In visionOS, is it possible to detect when a user is touching a physical surface in the real world, and also to project 2D graphics onto that surface? So imagine a windowless 2D app that is projected onto a surface, essentially turning a physical wall, table, etc. into a giant touchscreen. Kind of like this: https://appleinsider.com/articles/23/06/23/vision-pro-will-turn-any-surface-into-a-display-with-touch-control But I want every surface in the room to be touchable and able to display 2D graphics on the face of that surface, not floating in space; essentially turning every physical surface in the room into a UIView. Thanks!
Posted by
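Not from the thread, and it does not by itself give a windowless app wall projection or touch detection, but RealityKit plane anchors are one way to attach flat content to detected real-world surfaces from inside an ImmersiveSpace. A purely illustrative sketch (sensing the user's touch on the surface would be a separate problem, e.g. gestures or hand tracking):

import SwiftUI
import RealityKit

struct WallContentView: View {
    var body: some View {
        RealityView { content in
            // Anchor flat content to a detected vertical plane (e.g. a wall).
            let wallAnchor = AnchorEntity(.plane(.vertical,
                                                 classification: .wall,
                                                 minimumBounds: [0.5, 0.5]))
            // A flat plane mesh stands in for "2D graphics" rendered on the surface.
            let panel = ModelEntity(mesh: .generatePlane(width: 0.5, height: 0.3),
                                    materials: [SimpleMaterial(color: .white, isMetallic: false)])
            wallAnchor.addChild(panel)
            content.add(wallAnchor)
        }
    }
}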
Post not yet marked as solved
2 Replies
911 Views
Hello, I'm building an iOS app and I'm trying to find a way to programmatically extract a person from their identity photo (and leave the background behind). I'm watching the WWDC "Lift subjects from images in your app" video (a really cool feature), and I'm wondering if this is possible programmatically, without any user interaction. Thank you.
Posted by
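Not a confirmed answer from the thread, but since iOS 17 Vision exposes subject lifting programmatically through VNGenerateForegroundInstanceMaskRequest, with no user interaction required. A minimal sketch:

import Vision
import CoreImage

// Minimal sketch (iOS 17+): lift the foreground subject from a CGImage
// without user interaction, using Vision's instance mask request.
@available(iOS 17.0, *)
func liftSubject(from cgImage: CGImage) throws -> CIImage? {
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try handler.perform([request])
    guard let observation = request.results?.first else { return nil }

    // Produce the source image with everything except the subject removed.
    let maskedBuffer = try observation.generateMaskedImage(
        ofInstances: observation.allInstances,
        from: handler,
        croppedToInstancesExtent: false)
    return CIImage(cvPixelBuffer: maskedBuffer)
}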
Post not yet marked as solved
0 Replies
441 Views
Using VNDocumentCameraViewController, when the document is scanned automatically, no data comes back in func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan) {}. If the photo is taken manually, the data is returned. How can I solve this?
Posted by
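For comparison, a minimal delegate setup is sketched below; didFinishWith is expected to fire for both automatic and manual captures once the user taps Save, so if it still never arrives with wiring like this, a bug report seems reasonable:

import UIKit
import VisionKit

class ScanViewController: UIViewController, VNDocumentCameraViewControllerDelegate {

    // Present the scanner; the delegate must be set before presenting.
    func startScan() {
        let scanner = VNDocumentCameraViewController()
        scanner.delegate = self
        present(scanner, animated: true)
    }

    // Should be called after the user taps Save, for both automatic and manual captures.
    func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                      didFinishWith scan: VNDocumentCameraScan) {
        for pageIndex in 0..<scan.pageCount {
            let image = scan.imageOfPage(at: pageIndex)
            print("Scanned page \(pageIndex): \(image.size)")
        }
        controller.dismiss(animated: true)
    }

    func documentCameraViewControllerDidCancel(_ controller: VNDocumentCameraViewController) {
        controller.dismiss(animated: true)
    }

    func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                      didFailWithError error: Error) {
        controller.dismiss(animated: true)
    }
}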
Post not yet marked as solved
0 Replies
540 Views
var accessibilityComponent = AccessibilityComponent()
accessibilityComponent.isAccessibilityElement = true
accessibilityComponent.traits = [.button, .playsSound]
accessibilityComponent.label = "Cloud"
accessibilityComponent.value = "Grumpy"
cloud.components[AccessibilityComponent.self] = accessibilityComponent

// ...

var isHappy: Bool {
    didSet {
        cloudEntities[id].accessibilityValue = isHappy ? "Happy" : "Grumpy"
    }
}
Posted by
Post not yet marked as solved
0 Replies
369 Views
I notice that some WWDC sessions have a Code tab (in addition to Overview and Transcript), but session 10176 does not. I tried typing in what code I could see in the video, but it's obviously not a complete project. It would help if the authors of the session 10176 video could add the code to the session.
Posted by
Post marked as solved
1 Reply
498 Views
Hi Everyone, I'm having a strange crash on app launch with iOS 16 when I have a reference to an iOS 17-only framework symbol in my code. Even if I wrap the code in #available, I still get the crash on launch, and the code isn't even called yet; just its existence causes the crash. Pretty strange, I thought? The framework is VisionKit, and the code that causes the crash is:

if #available(iOS 17, *) {
    // .imageSubject is iOS 17 only - but this causes
    // a crash on launch in iOS 16 even with the #available check
    interaction.preferredInteractionTypes = .imageSubject
}

The crash is:

Referenced from: <91ED5216-D66C-3649-91DA-B31C0B55DDA1> /private/var/containers/Bundle/Application/78FD9C93-5657-4FF5-85E7-A44B60717870/XXXXXX.app/XXXXXX
Expected in: <AF01C435-3C37-3C7C-84D9-9B5EA3A59F5C> /System/Library/Frameworks/VisionKit.framework/VisionKit

Any thoughts, anyone? I know .imageSubject is iOS 17 only, but the #available check should catch it, no? And why does it crash immediately on launch, when that code is not even called? Odd!
Posted by
Post not yet marked as solved
0 Replies
467 Views
I have a Live Text implementation in the following LiveTextImageView. However, after the view loads and the analyze code runs, none of the delegate methods fire when I interact with the live view. Selecting text does not fire the textSelectionDidChange method, nor does highlightSelectedItemsDidChange fire when the Live Text button in the bottom right is pressed. I tried a few different implementations, including an approach where the delegate was defined on a separate class. I am running this on an iPhone 12 Pro I recently updated to 17.0.3. My goal is to be able to provide additional options to the user beyond the default Live Text overlay options, after identifying when they have finished selecting text.

//
// LiveTextImageView.swift
//

import UIKit
import SwiftUI
import VisionKit

class ImageAnalyzerWrapper {
    static let shared = ImageAnalyzer()
    private init() { }
}

struct LiveTextImageViewRepresentable: UIViewRepresentable {
    var image: UIImage

    func makeUIView(context: Context) -> LiveTextImageView {
        return LiveTextImageView(image: image)
    }

    func updateUIView(_ uiView: LiveTextImageView, context: Context) { }
}

class LiveTextImageView: UIImageView, ImageAnalysisInteractionDelegate, UIGestureRecognizerDelegate {
    var capturedSelectedText: String?
    let analyzer = ImageAnalyzerWrapper.shared
    let interaction = ImageAnalysisInteraction()

    init(image: UIImage) {
        super.init(frame: .zero)
        self.image = image
        let photoWrapper = PhotoWrapper(rawPhoto: image)
        let resizedPhoto = photoWrapper.viewportWidthCroppedPhoto(padding: 40)
        self.image = resizedPhoto
        self.contentMode = .scaleAspectFit
        self.addInteraction(interaction)
        interaction.preferredInteractionTypes = []
        interaction.analysis = nil
        analyzeImage()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func analyzeImage() {
        if let image = self.image {
            Task {
                let configuration = ImageAnalyzer.Configuration([.text])
                do {
                    let analysis = try await analyzer.analyze(image, configuration: configuration)
                    self.addInteraction(interaction)
                    interaction.delegate = self
                    interaction.analysis = analysis
                    interaction.preferredInteractionTypes = .textSelection
                } catch {
                    print("Error in live image handling")
                }
            }
        }
    }

    func interaction(
        _ interaction: ImageAnalysisInteraction,
        highlightSelectedItemsDidChange highlightSelectedItems: Bool) async {
        print("Highlighted items changed")
    }

    func interaction(_ interaction: ImageAnalysisInteraction,
                     shouldBeginAt point: CGPoint,
                     for interactionType: ImageAnalysisInteraction.InteractionTypes) async -> Bool {
        return interaction.hasInteractiveItem(at: point) || interaction.hasActiveTextSelection
    }

    func textSelectionDidChange(_ interaction: ImageAnalysisInteraction) async {
        print("Changed!")
        if #available(iOS 17.0, *) {
            capturedSelectedText = interaction.text
            print(capturedSelectedText ?? "")
        }
    }
}
Posted by
Post not yet marked as solved
0 Replies
459 Views
I use DataScannerViewController to scan barcodes and recognize text, and then get the AVCaptureDevice to turn the torch on/off, but DataScannerViewController stops scanning. DataScannerViewController has no API to get at the AVCaptureDevice for torch control. Expected: be able to use AVCaptureDevice to turn the torch on/off while DataScannerViewController keeps scanning.
Posted by
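For reference, the standard torch toggle outside of VisionKit looks like the sketch below; because DataScannerViewController owns its own capture session, grabbing the device this way can conflict with scanning, which appears to be exactly the limitation described above:

import AVFoundation

// Standard torch toggle via AVCaptureDevice. Note: DataScannerViewController
// manages its own capture session, so toggling the torch this way may
// interrupt scanning.
func setTorch(on: Bool) {
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: .back),
          device.hasTorch else { return }
    do {
        try device.lockForConfiguration()
        device.torchMode = on ? .on : .off
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}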
Post not yet marked as solved
0 Replies
377 Views
I am looking for an IP camera that is not very expensive. The key point is that I should be able to convert its frames to CMSampleBuffer, because I would like to use the images for some basic analysis with Vision. So far I have not found any IP camera manufacturer that provides an SDK for Swift and iOS for this kind of work.
Posted by
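Whatever camera ends up being used, once a decoded frame is available as a CMSampleBuffer the hand-off to Vision is small. A minimal sketch, assuming the network and decoding pipeline that produces the sample buffer already exists:

import Vision
import CoreMedia

// Run a simple Vision request on one frame delivered as a CMSampleBuffer
// (e.g. decoded from an IP-camera stream by your own pipeline).
func analyze(_ sampleBuffer: CMSampleBuffer) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    let request = VNDetectRectanglesRequest { request, error in
        let rectangles = request.results as? [VNRectangleObservation] ?? []
        print("Detected \(rectangles.count) rectangles")
    }

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
    try? handler.perform([request])
}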
Post not yet marked as solved
0 Replies
857 Views
Hi guys, has any individual developer received a Vision Pro dev kit, or is it just aimed at big companies? Basically, I would like to start with one or two of my apps that I have already removed from the store, just to get familiar with the visionOS platform and gain knowledge and skills on a small but real project. After that I would like to use the dev kit on another project. I work on a contract for a multinational communications company on a pilot project in a small country, and extending that project to visionOS might be a very interesting introduction of this new platform and could excite users of their services. However, I cannot quite reveal the details to Apple for reasons of confidentiality. After completing that contract (or during it, if I manage), I would like to start working on a great idea I have for Vision Pro (as many of you do). Is it worth applying for a dev kit as an individual developer? I have read posts saying some people were rejected. Is it better to start in the simulator and just wait for the actual hardware to show up in the App Store? I would prefer to just get the device, rather than start working with a device that I may need to return in the middle of an unfinished project. Any info on when pre-orders might be possible? Any idea what Mac specs are needed for developing for visionOS, especially for 3D scenes? I just got a MacBook Pro M3 Max with 96GB RAM, and I'm wondering whether I should have maxed out the config. Anybody using that config with the Vision Pro dev kit? Thanks.
Posted by