We are building an app that reads text. It reads English and horizontally written Japanese text successfully, but in some cases we need to read Japanese tategaki (vertically written text), and in those cases the same code produces no output. Do we need to change any configuration to read Japanese tategaki, or is it even possible to read Japanese tategaki using the Vision framework?
lazy var detectTextRequest = VNRecognizeTextRequest { request, error in
    self.resStr = "\n"
    self.words = [:]
    // Get the OCR result
    guard let res = request.results as? [VNRecognizedTextObservation] else { return }
    // Join the recognized words, separated by spaces
    let text = res.compactMap { $0.topCandidates(1).first?.string }.joined(separator: " ")
    var n = 0
    self.wordArr = [[]]
    self.xs = 1
    self.ys = 1
    var hs = 0.0 // To compare the heights of the words
    // To get the original axis (topmost word's position), only once
    for r in res {
        let word = r.topCandidates(1).first?.string
        self.words[word ?? ""] = [r.topLeft.x, r.topLeft.y]
        if self.cartLabelType == 1 {
            if (word?.components(separatedBy: CharacterSet(charactersIn: "//")).count ?? 0) > 2 {
                self.xs = r.topLeft.x
                self.ys = r.topLeft.y
            }
        }
    }
}
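For reference, here is a minimal sketch of how we could configure the request to explicitly target Japanese. The language list, recognition level, and the makeJapaneseTextRequest helper are our own assumptions; we have not found a documented option specifically for tategaki layout:

import Vision

// Sketch: explicitly requesting Japanese recognition.
// Assumption: recognitionLanguages / recognitionLevel are the relevant settings here;
// there is no documented switch for vertical (tategaki) text.
func makeJapaneseTextRequest() -> VNRecognizeTextRequest {
    let request = VNRecognizeTextRequest { request, error in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        for observation in observations {
            if let candidate = observation.topCandidates(1).first {
                print(candidate.string, observation.boundingBox)
            }
        }
    }
    request.recognitionLevel = .accurate      // CJK languages need the accurate path; fast is Latin-script only
    request.recognitionLanguages = ["ja-JP"]  // prefer Japanese over the default English
    request.usesLanguageCorrection = true
    return request
}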
In our app, we needed to use the VisionKit framework to lift the subject out of an image and crop it. Here is the relevant piece of code:
if #available(iOS 17.0, *) {
    let analyzer = ImageAnalyzer()
    let analysis = try? await analyzer.analyze(image, configuration: self.visionKitConfiguration)
    let interaction = ImageAnalysisInteraction()
    interaction.analysis = analysis
    interaction.preferredInteractionTypes = [.automatic]
    guard let subject = await interaction.subjects.first else {
        return image
    }
    let s = await interaction.subjects
    print(s.first?.bounds)
    guard let cropped = try? await subject.image else { return image }
    return cropped
}
But s.first?.bounds always returns a CGRect whose values are all 0. Is there any other way to get the position of the cropped subject? I need the rectangle in the original image from which the subject was cropped. Can anyone help?
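One workaround we are considering is attaching the interaction to a view that displays the analyzed image before reading the bounds. This is only a sketch based on the assumption that the bounds are computed relative to a hosting view; subjectRect(in:displayedIn:) and the .visualLookUp configuration are our own choices, not confirmed behavior:

import UIKit
import VisionKit

@available(iOS 17.0, *)
@MainActor
func subjectRect(in image: UIImage, displayedIn imageView: UIImageView) async -> CGRect? {
    let analyzer = ImageAnalyzer()
    let configuration = ImageAnalyzer.Configuration([.visualLookUp]) // assumption: enables subject lifting
    guard let analysis = try? await analyzer.analyze(image, configuration: configuration) else {
        return nil
    }
    let interaction = ImageAnalysisInteraction()
    imageView.image = image
    imageView.addInteraction(interaction)      // assumption: bounds need a hosting view
    interaction.analysis = analysis
    interaction.preferredInteractionTypes = [.imageSubject]
    guard let subject = await interaction.subjects.first else { return nil }
    return subject.bounds                      // assumption: reported relative to the view's content
}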
We are developing an app that uses AVCaptureSession. Here is part of our code:
if context.config.basic_settings.auto_focus == 1 {
    // Auto focus
    device.focusMode = AVCaptureDevice.FocusMode.continuousAutoFocus
    completion()
} else {
    // Manual focus with a fixed lens position
    device.focusMode = AVCaptureDevice.FocusMode.locked
    var _lenspos = Float(context.config.basic_settings.lens_position) ?? 0.5
    if _lenspos > 1.0 { _lenspos = 1.0 }
    if _lenspos < 0.0 { _lenspos = 0.0 }
    device.setFocusModeLocked(lensPosition: _lenspos, completionHandler: { (time) in
        completion()
    })
}
This code successfully sets the focus mode to autofocus or manual focus, and also sets the lens position correctly, when we use an iPhone 13 or iPhone 14. But on iPhone 15 the same code fails to set the focus mode or lens position. We tried several different iPhone 15 devices and the result is always the same (it fails to set focus), whereas every iPhone 13 and iPhone 14 device we tried sets the focus correctly every time. Is this a bug in iPhone 15, or is there anything we can do about the issue?
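For reference, here is a sketch of the same logic wrapped in lockForConfiguration() and capability checks, in case the iPhone 15 camera rejects an unguarded setting. applyFocus(on:useAutoFocus:lensPositionSetting:completion:) and its parameters are placeholders for our own configuration object:

import AVFoundation

// Sketch: the same focus logic guarded by lockForConfiguration() and capability checks.
// `useAutoFocus` and `lensPositionSetting` stand in for our own configuration values.
func applyFocus(on device: AVCaptureDevice,
                useAutoFocus: Bool,
                lensPositionSetting: Float,
                completion: @escaping () -> Void) {
    do {
        try device.lockForConfiguration()
    } catch {
        print("lockForConfiguration failed: \(error)")
        completion()
        return
    }
    defer { device.unlockForConfiguration() }

    if useAutoFocus {
        if device.isFocusModeSupported(.continuousAutoFocus) {
            device.focusMode = .continuousAutoFocus
        }
        completion()
    } else if device.isLockingFocusWithCustomLensPositionSupported {
        // Clamp the configured lens position to the valid 0.0...1.0 range
        let position = min(max(lensPositionSetting, 0.0), 1.0)
        device.setFocusModeLocked(lensPosition: position) { _ in
            completion()
        }
    } else {
        // Fall back: lock at the current lens position if a custom position is unsupported
        if device.isFocusModeSupported(.locked) {
            device.focusMode = .locked
        }
        completion()
    }
}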