VisionKit

Scan documents with the camera on iPhone and iPad devices using VisionKit.

VisionKit Documentation

Posts under VisionKit tag

29 Posts
Post not yet marked as solved
0 Replies
369 Views
I want to develop an app that uses the camera to scan documents into a PDF, with some processing to make the document as clear as possible. Assuming the user grants camera access, is there any special arrangement or agreement with Apple required, or can I develop the application directly using the built-in framework?
Posted by mohhosny. Last updated.
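VisionKit's VNDocumentCameraViewController handles exactly this use case, and no special agreement with Apple is needed beyond the camera-permission string (NSCameraUsageDescription) in Info.plist. A minimal Swift sketch of presenting the scanner and assembling the captured pages into one PDF (the class name and the persistence step are placeholders):

```swift
import UIKit
import VisionKit

// Minimal sketch: present VisionKit's document scanner and render the
// scanned pages into a single PDF. VisionKit applies its own perspective
// correction and image cleanup to each captured page.
final class ScanViewController: UIViewController, VNDocumentCameraViewControllerDelegate {

    func startScan() {
        // The scanner is unavailable on some devices (and on the simulator).
        guard VNDocumentCameraViewController.isSupported else { return }
        let scanner = VNDocumentCameraViewController()
        scanner.delegate = self
        present(scanner, animated: true)
    }

    func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                      didFinishWith scan: VNDocumentCameraScan) {
        // Draw each captured page into one PDF document.
        let renderer = UIGraphicsPDFRenderer(bounds: .zero)
        let pdfData = renderer.pdfData { context in
            for index in 0..<scan.pageCount {
                let image = scan.imageOfPage(at: index)
                context.beginPage(withBounds: CGRect(origin: .zero, size: image.size),
                                  pageInfo: [:])
                image.draw(at: .zero)
            }
        }
        // Persist `pdfData` however the app needs (placeholder step).
        _ = pdfData
        controller.dismiss(animated: true)
    }
}
```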
Post not yet marked as solved
1 Reply
379 Views
Hi everyone, I'm making a broadcast app. In this app I have a UIView and 3 buttons: one for the ultra wide camera, one for the wide camera, and one for the telephoto lens. How can I display the camera view in the UIView when one of the buttons is pressed? Thanks, Robby Flockman
Posted. Last updated.
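One common approach (not from the thread itself) is a single AVCaptureSession whose preview layer lives in the UIView, swapping the session's input when a lens button is tapped. A hedged Swift sketch, assuming back-facing devices and that each button passes the matching device type:

```swift
import UIKit
import AVFoundation

// Sketch: one capture session, one preview layer in the UIView, and a
// select(_:) method each lens button calls with its device type.
final class CameraSwitcher {
    let session = AVCaptureSession()
    let previewLayer: AVCaptureVideoPreviewLayer

    init(in view: UIView) {
        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.frame = view.bounds
        previewLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(previewLayer)
    }

    // Pass .builtInUltraWideCamera, .builtInWideAngleCamera, or
    // .builtInTelephotoCamera — one per button. Not every device has
    // all three lenses, so the lookup can fail.
    func select(_ deviceType: AVCaptureDevice.DeviceType) {
        guard let device = AVCaptureDevice.default(deviceType, for: .video, position: .back),
              let input = try? AVCaptureDeviceInput(device: device) else { return }
        session.beginConfiguration()
        session.inputs.forEach { session.removeInput($0) }   // drop the previous lens
        if session.canAddInput(input) { session.addInput(input) }
        session.commitConfiguration()
        if !session.isRunning { session.startRunning() }
    }
}
```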
Post not yet marked as solved
0 Replies
356 Views
I'm using VNRecognizeTextRequest with:

    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = false
    request.recognitionLanguages = ["en-US", "de-DE"]

The code is essentially taken from https://developer.apple.com/documentation/vision/reading_phone_numbers_in_real_time. But when I perform it with a VNImageRequestHandler, I get the following warnings:

    Could not determine an appropriate width index for aspect ratio 0.0062
    Could not determine an appropriate width index for aspect ratio 0.0078
    Could not determine an appropriate width index for aspect ratio 0.0089
    ...

I tried using .fast for recognitionLevel, which helped, but the results are not as good as at the .accurate level. Can you suggest how to fix the problem while keeping .accurate?
Posted by andros. Last updated.
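The log lines appear to come from candidate text regions with extreme aspect ratios. One workaround sometimes suggested (not a confirmed fix) is to raise minimumTextHeight or restrict regionOfInterest so Vision skips those slivers; a sketch of the configuration from the post with those two knobs added:

```swift
import Vision

// Configuration as in the post, plus two properties that narrow what
// Vision considers text. The exact values are assumptions to tune.
let request = VNRecognizeTextRequest { request, error in
    guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
    for observation in observations {
        if let candidate = observation.topCandidates(1).first {
            print(candidate.string, candidate.confidence)
        }
    }
}
request.recognitionLevel = .accurate
request.usesLanguageCorrection = false
request.recognitionLanguages = ["en-US", "de-DE"]

// Ignore text shorter than 2% of the image height (tune for your input).
request.minimumTextHeight = 0.02
// Optionally restrict recognition to the band that actually contains text
// (normalized, lower-left-origin coordinates):
// request.regionOfInterest = CGRect(x: 0, y: 0.4, width: 1, height: 0.2)
```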
Post not yet marked as solved
2 Replies
560 Views
Hi, I found a problem with the Vision framework in my app under iOS 15. I write the recognized text into a string, and under iOS 15 the result is not in the right order. Maybe an example explains it better :-)

Text to scan:
Hello, my name is Michael and I am the programmer of an app named Scan2Clipboard. Now I've focused a problem with VNRecognizeTextRequest and iOS 15.

Result under iOS 14:
Hello, my name is Michael and I am the programmer of an app named Scan2Clipboard. Now I've focused a problem with VNRecognizeTextRequest and iOS 15.

Result under iOS 15:
Hello, my name is Michael and I am the programmer of an app VNRecognizeTextRequest and iOS 15. named Scan2Clipboard. Now I've focused a problem with

I've tried some other apps from the App Store (Scan&Copy, Quick Scan) and they show the same behavior; they use the Vision framework too. Does anyone else have this issue?
Posted. Last updated.
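A common workaround for ordering differences between OS versions (not a confirmed fix for this specific regression) is to stop relying on the order Vision returns and instead sort the observations yourself before joining them. Vision reports bounding boxes in a normalized, lower-left-origin space, so a larger minY means the text sits higher in the image:

```swift
import Vision

// Sort observations top-to-bottom, then left-to-right, before joining.
// The 0.01 tolerance treats observations on roughly the same baseline
// as one line; tune it for your content.
func orderedText(from observations: [VNRecognizedTextObservation]) -> String {
    let sorted = observations.sorted { a, b in
        if abs(a.boundingBox.minY - b.boundingBox.minY) > 0.01 {
            return a.boundingBox.minY > b.boundingBox.minY   // higher on the page first
        }
        return a.boundingBox.minX < b.boundingBox.minX       // then left to right
    }
    return sorted
        .compactMap { $0.topCandidates(1).first?.string }
        .joined(separator: "\n")
}
```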
Post not yet marked as solved
1 Reply
482 Views
With the release of Xcode 13, a large section of my Vision framework processing code produces errors and no longer compiles; all of these calls have become deprecated. This is my original code:

    do {
        // Perform VNDetectHumanHandPoseRequest.
        try handler.perform([handPoseRequest])
        // Continue only when a hand was detected in the frame.
        // Since we set the maximumHandCount property of the request to 1,
        // there will be at most one observation.
        guard let observation = handPoseRequest.results?.first else {
            self.state = "no hand"
            return
        }
        // Get points for thumb and fingers.
        let thumbPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
        let indexFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyIndexFinger)
        let middleFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyMiddleFinger)
        let ringFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyRingFinger)
        let littleFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyLittleFinger)
        let wristPoints = try observation.recognizedPoints(forGroupKey: .all)

        // Look for tip points.
        guard let thumbTipPoint = thumbPoints[.handLandmarkKeyThumbTIP],
              let thumbIpPoint = thumbPoints[.handLandmarkKeyThumbIP],
              let thumbMpPoint = thumbPoints[.handLandmarkKeyThumbMP],
              let thumbCMCPoint = thumbPoints[.handLandmarkKeyThumbCMC] else {
            self.state = "no tip"
            return
        }
        guard let indexTipPoint = indexFingerPoints[.handLandmarkKeyIndexTIP],
              let indexDipPoint = indexFingerPoints[.handLandmarkKeyIndexDIP],
              let indexPipPoint = indexFingerPoints[.handLandmarkKeyIndexPIP],
              let indexMcpPoint = indexFingerPoints[.handLandmarkKeyIndexMCP] else {
            self.state = "no index"
            return
        }
        guard let middleTipPoint = middleFingerPoints[.handLandmarkKeyMiddleTIP],
              let middleDipPoint = middleFingerPoints[.handLandmarkKeyMiddleDIP],
              let middlePipPoint = middleFingerPoints[.handLandmarkKeyMiddlePIP],
              let middleMcpPoint = middleFingerPoints[.handLandmarkKeyMiddleMCP] else {
            self.state = "no middle"
            return
        }
        guard let ringTipPoint = ringFingerPoints[.handLandmarkKeyRingTIP],
              let ringDipPoint = ringFingerPoints[.handLandmarkKeyRingDIP],
              let ringPipPoint = ringFingerPoints[.handLandmarkKeyRingPIP],
              let ringMcpPoint = ringFingerPoints[.handLandmarkKeyRingMCP] else {
            self.state = "no ring"
            return
        }
        guard let littleTipPoint = littleFingerPoints[.handLandmarkKeyLittleTIP],
              let littleDipPoint = littleFingerPoints[.handLandmarkKeyLittleDIP],
              let littlePipPoint = littleFingerPoints[.handLandmarkKeyLittlePIP],
              let littleMcpPoint = littleFingerPoints[.handLandmarkKeyLittleMCP] else {
            self.state = "no little"
            return
        }
        guard let wristPoint = wristPoints[.handLandmarkKeyWrist] else {
            self.state = "no wrist"
            return
        }
        ...
    }

Now every line from thumbPoints onward results in an error. I have fixed the first part (not sure whether it is correct, since it still cannot compile) to:

    let thumbPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.thumb.rawValue)
    let indexFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.indexFinger.rawValue)
    let middleFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.middleFinger.rawValue)
    let ringFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.ringFinger.rawValue)
    let littleFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.littleFinger.rawValue)
    let wristPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.littleFinger.rawValue)

I tried many different things but just could not get retrieving the individual points to work. Can anyone help with fixing this?
Posted. Last updated.
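The replacement API (iOS 14+) takes typed names rather than raw group keys: recognizedPoints(_:) accepts a VNHumanHandPoseObservation.JointsGroupName, and individual joints are looked up with VNHumanHandPoseObservation.JointName values such as .thumbTip. (Note, too, that the last line of the attempted fix above looks up .littleFinger where the wrist was presumably intended.) A hedged sketch of the tip lookups in the new style:

```swift
import Vision

// Typed-name version of the lookups from the post. recognizedPoints(_:)
// returns a [JointName: VNRecognizedPoint] dictionary per group, and the
// .all group includes the wrist.
func fingerTips(from observation: VNHumanHandPoseObservation) throws -> [VNRecognizedPoint] {
    let thumb  = try observation.recognizedPoints(.thumb)
    let index  = try observation.recognizedPoints(.indexFinger)
    let middle = try observation.recognizedPoints(.middleFinger)
    let ring   = try observation.recognizedPoints(.ringFinger)
    let little = try observation.recognizedPoints(.littleFinger)
    let all    = try observation.recognizedPoints(.all)

    guard let thumbTip  = thumb[.thumbTip],
          let indexTip  = index[.indexTip],
          let middleTip = middle[.middleTip],
          let ringTip   = ring[.ringTip],
          let littleTip = little[.littleTip],
          let wrist     = all[.wrist] else {
        return []
    }
    // The remaining joints (.thumbIP, .indexDIP, .middlePIP, .ringMCP, …)
    // follow the same subscript pattern.
    return [thumbTip, indexTip, middleTip, ringTip, littleTip, wrist]
}
```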
Post not yet marked as solved
0 Replies
661 Views
We are developing a credit card scanning feature in our app. We came across VNDocumentCameraViewController for the image-capturing step (the text detection is done using Vision's text recognition API).

Implementation:

    VNDocumentCameraViewController *vc = [[VNDocumentCameraViewController alloc] init];
    vc.delegate = self;
    [self presentViewController:vc animated:YES completion:nil];

Query 1: We want to remove the editing mode of VNDocumentCameraViewController when capturing fails for the card, and we want to allow only one image capture. We did not find anything about this in the documentation: https://developer.apple.com/documentation/visionkit/vndocumentcameraviewcontroller. Could you please help with how we can achieve that?

Query 2: Can we use the credit card scan feature of Safari (which appears when a field is configured as a credit card number input) in a native iOS application? Is there any way to do that? Please let us know.
Posted. Last updated.
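As far as the public API goes, VNDocumentCameraViewController exposes no properties to disable the review/editing step or to cap the page count, so a common workaround is to simply use only the first page in the delegate callback. A Swift sketch (the CardScanViewController type is hypothetical):

```swift
import UIKit
import VisionKit

// Hypothetical host view controller for the card scanner.
final class CardScanViewController: UIViewController {}

extension CardScanViewController: VNDocumentCameraViewControllerDelegate {
    func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                      didFinishWith scan: VNDocumentCameraScan) {
        // Ignore everything except the first captured page, regardless of
        // how many pages the user scanned before tapping Save.
        if scan.pageCount > 0 {
            let cardImage = scan.imageOfPage(at: 0)
            // Hand cardImage to the Vision text-recognition step (placeholder).
            _ = cardImage
        }
        controller.dismiss(animated: true)
    }
}
```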
Post not yet marked as solved
1 Reply
452 Views
How can I support the Korean language with VNDocumentCameraViewController? Setting recognitionLanguages = ["ko-KR"] does not seem to apply. Please support Korean.
Posted. Last updated.
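One point worth separating here: recognitionLanguages is a property of Vision's VNRecognizeTextRequest, not of VisionKit's VNDocumentCameraViewController, and only the languages Vision itself reports as supported on a given OS version can be used. A hedged sketch that checks the list before requesting Korean:

```swift
import Vision

// Query the languages the text recognizer supports on this OS (iOS 15+)
// and only request Korean if it is actually available.
let request = VNRecognizeTextRequest()
request.recognitionLevel = .accurate
if let supported = try? request.supportedRecognitionLanguages() {
    print(supported)   // the available language identifiers on this device
    if supported.contains("ko-KR") {
        request.recognitionLanguages = ["ko-KR"]
    }
}
```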