Hi Everyone,
I'm making a broadcast app.
In this app I have a UIView and 3 buttons:
1 button for the ultra wide camera.
1 button for the wide camera.
1 button for the telephoto lens.
How can I display the camera's live view in the UIView when one of the buttons is pressed?
Thanks,
Robby Flockman
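A minimal sketch of one approach: keep a single AVCaptureSession, attach an AVCaptureVideoPreviewLayer to the UIView once, and swap the session's input when a button is tapped. Names like `CameraSwitcher` and `previewView` are illustrative, not from the original post, and `startRunning()` should really be called off the main thread.

```swift
import AVFoundation
import UIKit

// Sketch: one capture session whose camera input is swapped per button tap.
final class CameraSwitcher {
    let session = AVCaptureSession()
    private var currentInput: AVCaptureDeviceInput?

    // Attach the live preview to the view once, e.g. in viewDidLoad.
    func attachPreview(to previewView: UIView) {
        let layer = AVCaptureVideoPreviewLayer(session: session)
        layer.videoGravity = .resizeAspectFill
        layer.frame = previewView.bounds
        previewView.layer.addSublayer(layer)
        session.startRunning()   // in real code, call this on a background queue
    }

    // Call from each button with .builtInUltraWideCamera,
    // .builtInWideAngleCamera, or .builtInTelephotoCamera.
    func switchTo(_ deviceType: AVCaptureDevice.DeviceType) {
        guard let device = AVCaptureDevice.default(deviceType, for: .video, position: .back),
              let input = try? AVCaptureDeviceInput(device: device) else { return }
        session.beginConfiguration()
        if let old = currentInput { session.removeInput(old) }
        if session.canAddInput(input) {
            session.addInput(input)
            currentInput = input
        }
        session.commitConfiguration()
    }
}
```

Each of the three buttons would then call `switchTo(_:)` with the matching device type; devices that don't exist on a given phone simply return nil from `AVCaptureDevice.default`.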
I'm using VNRecognizeTextRequest with:
request.recognitionLevel = .accurate
request.usesLanguageCorrection = false
request.recognitionLanguages = ["en-US", "de-DE"]
The code is essentially taken from
https://developer.apple.com/documentation/vision/reading_phone_numbers_in_real_time
but when I perform the request with a VNImageRequestHandler, I get the following warnings:
Could not determine an appropriate width index for aspect ratio 0.0062
Could not determine an appropriate width index for aspect ratio 0.0078
Could not determine an appropriate width index for aspect ratio 0.0089
...
I tried using .fast for recognitionLevel, and that helped, but the results are not as good as at the .accurate level.
Can you suggest how to fix the problem while staying at the .accurate level?
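The warning appears to come from candidate text regions with extreme aspect ratios (very thin slivers). Whether it can be silenced this way is an assumption, not a documented fix, but one mitigation sometimes suggested is to set `minimumTextHeight` so tiny regions are skipped; `cgImage` here stands in for whatever image the post's code feeds the handler:

```swift
import Vision

// Sketch: same configuration as in the post, plus minimumTextHeight so that
// extremely small candidate regions are skipped before recognition.
let request = VNRecognizeTextRequest { request, error in
    guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
    for observation in observations {
        if let top = observation.topCandidates(1).first {
            print(top.string, top.confidence)
        }
    }
}
request.recognitionLevel = .accurate
request.usesLanguageCorrection = false
request.recognitionLanguages = ["en-US", "de-DE"]
request.minimumTextHeight = 0.02   // ignore text smaller than 2% of image height

let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
try handler.perform([request])
```

If the warnings are harmless log noise (the results are still correct), it may also be fine to ignore them.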
Hi,
I found a problem with the Vision framework in my app on iOS 15. I write the recognized text into a string, and under iOS 15 the result is not in the right order.
Maybe an example explains it better :-)
Text to scan:
Hello, my name is Michael and I am the programmer of an app
named Scan2Clipboard. Now I've focused a problem with VNRecognizeTextRequest and iOS 15.
Result under iOS 14:
Hello, my name is Michael and I am the programmer of an app
named Scan2Clipboard. Now I've focused a problem with VNRecognizeTextRequest and iOS 15.
Result under iOS 15:
Hello, my name is Michael and I am the programmer of an app
VNRecognizeTextRequest and iOS 15.
named Scan2Clipboard. Now I've focused a problem with
I've tried some other apps from the App Store (Scan&Copy, Quick Scan). They show the same behavior, and they use the Vision framework too. Does anyone else have this issue?
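One workaround sketch, assuming the order of the returned observations changed between iOS 14 and 15: instead of relying on the order Vision returns them in, sort the observations by position before joining them. `boundingBox` is in normalized coordinates with the origin at the bottom-left, so a higher line has a larger y value; the `lineThreshold` value is an illustrative guess.

```swift
import Vision

// Sort recognized-text observations top-to-bottom (and left-to-right within
// a line) before joining them into a single string.
func orderedText(from observations: [VNRecognizedTextObservation]) -> String {
    let lineThreshold: CGFloat = 0.01   // how close two midY values must be to count as one line
    let sorted = observations.sorted { a, b in
        if abs(a.boundingBox.midY - b.boundingBox.midY) > lineThreshold {
            return a.boundingBox.midY > b.boundingBox.midY   // higher on the page first
        }
        return a.boundingBox.minX < b.boundingBox.minX       // then left to right
    }
    return sorted
        .compactMap { $0.topCandidates(1).first?.string }
        .joined(separator: "\n")
}
```

This makes the output independent of whatever internal ordering the recognizer uses on a given OS version.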
Did something change on face detection / Vision Framework on iOS 15?
Using VNDetectFaceLandmarksRequest and reading the VNFaceLandmarkRegion2D to detect eyes no longer works on iOS 15 as it did before. I am running the exact same code on an iOS 14 and an iOS 15 device, and the coordinates are different, as seen in the screenshot.
Any ideas?
With the release of Xcode 13, a large section of my Vision framework processing code no longer compiles: the APIs it uses have all been deprecated.
This is my original code:
do {
    // Perform VNDetectHumanHandPoseRequest.
    try handler.perform([handPoseRequest])
    // Continue only when a hand was detected in the frame. Since we set the
    // maximumHandCount property of the request to 1, there will be at most one observation.
    guard let observation = handPoseRequest.results?.first else {
        self.state = "no hand"
        return
    }
    // Get points for the thumb and each finger.
    let thumbPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
    let indexFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyIndexFinger)
    let middleFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyMiddleFinger)
    let ringFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyRingFinger)
    let littleFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyLittleFinger)
    let wristPoints = try observation.recognizedPoints(forGroupKey: .all)
    // Look for tip points.
    guard let thumbTipPoint = thumbPoints[.handLandmarkKeyThumbTIP],
          let thumbIpPoint = thumbPoints[.handLandmarkKeyThumbIP],
          let thumbMpPoint = thumbPoints[.handLandmarkKeyThumbMP],
          let thumbCMCPoint = thumbPoints[.handLandmarkKeyThumbCMC] else {
        self.state = "no tip"
        return
    }
    guard let indexTipPoint = indexFingerPoints[.handLandmarkKeyIndexTIP],
          let indexDipPoint = indexFingerPoints[.handLandmarkKeyIndexDIP],
          let indexPipPoint = indexFingerPoints[.handLandmarkKeyIndexPIP],
          let indexMcpPoint = indexFingerPoints[.handLandmarkKeyIndexMCP] else {
        self.state = "no index"
        return
    }
    guard let middleTipPoint = middleFingerPoints[.handLandmarkKeyMiddleTIP],
          let middleDipPoint = middleFingerPoints[.handLandmarkKeyMiddleDIP],
          let middlePipPoint = middleFingerPoints[.handLandmarkKeyMiddlePIP],
          let middleMcpPoint = middleFingerPoints[.handLandmarkKeyMiddleMCP] else {
        self.state = "no middle"
        return
    }
    guard let ringTipPoint = ringFingerPoints[.handLandmarkKeyRingTIP],
          let ringDipPoint = ringFingerPoints[.handLandmarkKeyRingDIP],
          let ringPipPoint = ringFingerPoints[.handLandmarkKeyRingPIP],
          let ringMcpPoint = ringFingerPoints[.handLandmarkKeyRingMCP] else {
        self.state = "no ring"
        return
    }
    guard let littleTipPoint = littleFingerPoints[.handLandmarkKeyLittleTIP],
          let littleDipPoint = littleFingerPoints[.handLandmarkKeyLittleDIP],
          let littlePipPoint = littleFingerPoints[.handLandmarkKeyLittlePIP],
          let littleMcpPoint = littleFingerPoints[.handLandmarkKeyLittleMCP] else {
        self.state = "no little"
        return
    }
    guard let wristPoint = wristPoints[.handLandmarkKeyWrist] else {
        self.state = "no wrist"
        return
    }
    ...
}
Now every line from thumbPoints onwards results in an error. I have fixed the first part (not sure if it is correct, as it still cannot compile) to:
let thumbPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.thumb.rawValue)
let indexFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.indexFinger.rawValue)
let middleFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.middleFinger.rawValue)
let ringFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.ringFinger.rawValue)
let littleFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.littleFinger.rawValue)
let wristPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.all.rawValue)
(Note the last line originally repeated .littleFinger; the wrist point comes from the .all group, matching the old code.) I tried many different things but just could not get retrieving the individual points to work. Can anyone help fix this?
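A sketch of the typed accessors (available since iOS 14) that replace the deprecated string keys: `recognizedPoints(_:)` takes a `VNHumanHandPoseObservation.JointsGroupName`, and the returned dictionary is keyed by typed `JointName` values rather than landmark-key strings. Only the thumb, index finger, and wrist are spelled out; the other fingers follow the same pattern.

```swift
import Vision

func readPoints(from observation: VNHumanHandPoseObservation) throws {
    // Typed group accessors replace recognizedPoints(forGroupKey:).
    let thumbPoints = try observation.recognizedPoints(.thumb)
    let indexPoints = try observation.recognizedPoints(.indexFinger)
    let wristPoints = try observation.recognizedPoints(.all)

    // Typed JointName subscripts replace the .handLandmarkKey... strings.
    guard let thumbTip = thumbPoints[.thumbTip],
          let thumbIP  = thumbPoints[.thumbIP],
          let thumbMP  = thumbPoints[.thumbMP],
          let thumbCMC = thumbPoints[.thumbCMC] else { return }

    guard let indexTip = indexPoints[.indexTip],
          let indexDIP = indexPoints[.indexDIP],
          let indexPIP = indexPoints[.indexPIP],
          let indexMCP = indexPoints[.indexMCP] else { return }

    guard let wrist = wristPoints[.wrist] else { return }

    // .middleFinger -> .middleTip/.middleDIP/.middlePIP/.middleMCP,
    // .ringFinger   -> .ringTip/...,  .littleFinger -> .littleTip/...
    print(thumbTip.location, thumbIP.location, thumbMP.location, thumbCMC.location,
          indexTip.location, indexDIP.location, indexPIP.location, indexMCP.location,
          wrist.location)
}
```

With the typed overloads there is no `.rawValue` conversion at all, which is likely why the mixed string-key/typed attempt above would not compile.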
Hey guys,
I'm facing the issue that documents scanned with the Files app on my iPhone 12 Pro Max are of pretty bad quality. I guess it started with iOS 15 beta 3.
Unfortunately the issue still persists in the current non-beta iOS 15 release.
It's the same on iPadOS 15.
When I launch 'Scan with iPhone' from the Preview app on macOS, the quality is as good as always. Hence the issue looks related to the Files app or PDF processing on iPhone.
Has anybody else seen the same?
Thanks and cheers,
Flory
We are developing a credit card scanning feature in our app.
We chose VNDocumentCameraViewController for the image-capture step (the text detection is done using Vision's text recognition API).
Implementation
VNDocumentCameraViewController *vc = [[VNDocumentCameraViewController alloc] init];
vc.delegate = self;
[self presentViewController:vc animated:YES completion:nil];
Query-1
Now, here we want to remove the editing mode of VNDocumentCameraViewController when capturing fails for the card, and we want to allow capturing only one image.
We did not find anything about this in the documentation: https://developer.apple.com/documentation/visionkit/vndocumentcameraviewcontroller
Could you please help with how we can achieve that?
Query-2
Can we use the credit card scan feature of Safari (which appears when a field is configured as a credit card number input) in a native iOS application?
Is there any way to do that? Please let us know.
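On Query 1: as far as the public API goes, VNDocumentCameraViewController does not expose a way to disable the review/edit step or cap the page count, so a common workaround is to use only the first captured page in the delegate callback. A sketch in Swift (the post's snippet is Objective-C; `ViewController` is an assumed class name), and on Query 2, Safari's card scanner is likewise not exposed to native apps through any public API I am aware of:

```swift
import VisionKit

extension ViewController: VNDocumentCameraViewControllerDelegate {
    func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                      didFinishWith scan: VNDocumentCameraScan) {
        // Take only the first page, even if the user scanned several.
        if scan.pageCount > 0 {
            let cardImage = scan.imageOfPage(at: 0)
            // ... run text recognition on cardImage ...
            _ = cardImage
        }
        controller.dismiss(animated: true)
    }
}
```

If tighter control over capture is required, the alternative is building a custom capture UI with AVFoundation instead of the system document camera.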
Hi folks,
I would like to ask if it's possible to keep the LiDAR/back camera running in the background while another app is open?
Thank you,
Matej
How can I get Korean language support for text recognition on documents captured with VNDocumentCameraViewController?
Setting
recognitionLanguages = ["ko-KR"]
is not supported and has no effect.
Please support Korean.
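One clarification and a sketch: VNDocumentCameraViewController only captures images; the language setting belongs to the VNRecognizeTextRequest you run on the captured pages. The recognizer can be asked which languages it actually supports on the current OS before setting one (the instance method below is iOS 15+; earlier systems offer a class-method variant taking a recognition level and revision):

```swift
import Vision

// Query the supported recognition languages instead of assuming "ko-KR" works.
let request = VNRecognizeTextRequest()
if let supported = try? request.supportedRecognitionLanguages() {
    print(supported)   // the exact list depends on the OS version
    if supported.contains("ko-KR") {
        request.recognitionLanguages = ["ko-KR"]
    }
}
```

If "ko-KR" is absent from that list on a given OS version, the request simply cannot recognize Korean there, and a feedback request to Apple is the only recourse.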