VisionKit


Scan documents with the camera on iPhone and iPad devices using VisionKit.

Posts under VisionKit tag

49 Posts

Feature Request – Support for GS1 DataBar Stacked in Vision Framework
Dear Apple Developer Team,

I am writing to request the addition of GS1 DataBar Stacked (both regular and expanded variants) to the barcode symbologies supported by the Vision framework (VNBarcodeSymbology) and VisionKit's DataScannerViewController. Currently, Vision supports several GS1 DataBar formats, such as:

- VNBarcodeSymbology.gs1DataBar
- VNBarcodeSymbology.gs1DataBarExpanded
- VNBarcodeSymbology.gs1DataBarLimited

However, GS1 DataBar Stacked is widely used in industries such as retail, pharmaceuticals, and logistics, where space constraints prevent the use of the standard GS1 DataBar format. Many businesses rely on this symbology to encode GTINs and other product data, but Apple's barcode scanning API does not explicitly support it.

Why This Feature Matters:

- Essential for Small Packaging: GS1 DataBar Stacked is commonly used on small product labels where a standard linear barcode does not fit.
- Widespread Industry Adoption: Many point-of-sale (POS) systems and inventory management tools require this symbology.
- Improves iOS Adoption for Enterprise Use: Adding support would make Apple's Vision framework a more viable solution for businesses that currently rely on third-party barcode scanning SDKs.

Feature Request: Please add GS1 DataBar Stacked and GS1 DataBar Expanded Stacked to the recognized symbologies in:

- VNBarcodeSymbology (for the Vision framework)
- DataScannerViewController (for VisionKit)

This addition would enhance the versatility of Apple's barcode scanning tools and reduce the need for third-party libraries. I appreciate your consideration of this request and would be happy to provide more details or test implementations if needed. Thank you for your time and support!

Best regards
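For reference, a minimal check (assuming iOS 15 or later) that lists the symbologies the installed Vision revision reports as supported; GS1 DataBar Stacked would need to appear in this list before the framework could return it:

```swift
import Vision

// Sketch (assumes iOS 15+): print every symbology the current Vision
// revision claims to decode. A Stacked variant would have to show up
// here before Vision could report it in observations.
let request = VNDetectBarcodesRequest()
if let symbologies = try? request.supportedSymbologies() {
    symbologies.forEach { print($0.rawValue) }
}
```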
Replies: 2 · Boosts: 5 · Views: 280 · Activity: 2w
Inquiry About GS1 DataBar Stacked Support in Vision Framework
Hello,

I am currently developing an application that requires barcode scanning using Apple’s Vision framework (VNBarcodeSymbology). I noticed that the framework supports several GS1 DataBar symbologies, such as:

- VNBarcodeSymbology.gs1DataBar
- VNBarcodeSymbology.gs1DataBarExpanded
- VNBarcodeSymbology.gs1DataBarLimited

However, I could not find any explicit reference to support for GS1 DataBar Stacked (both regular and expanded variants). Could you confirm whether GS1 DataBar Stacked is currently supported in VisionKit's DataScannerViewController or VNBarcodeObservation? If not, are there any plans to include support for this symbology in a future iOS update?

This functionality is critical for my use case, as GS1 DataBar Stacked barcodes are widely used in retail, pharmaceuticals, and logistics, where space constraints prevent the use of standard GS1 DataBar formats. I appreciate any clarification on this matter and would be happy to provide additional details if needed.
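For context, a scanner restricted to the GS1 DataBar variants Vision currently exposes looks like the sketch below; the symbology list is exactly the gap in question, since there is no .gs1DataBarStacked case to add:

```swift
import VisionKit

// Sketch: a DataScannerViewController limited to the GS1 DataBar
// symbologies Vision exposes today. A Stacked variant cannot be listed
// because VNBarcodeSymbology has no case for it.
let scanner = DataScannerViewController(
    recognizedDataTypes: [
        .barcode(symbologies: [.gs1DataBar, .gs1DataBarExpanded, .gs1DataBarLimited])
    ],
    qualityLevel: .accurate
)
```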
Replies: 0 · Boosts: 0 · Views: 145 · Activity: 2w
Do the coordinates obtained by scanning a QR code with VNDetectBarcodesRequest match the coordinates of the finder pattern?
I am creating an application that uses VNDetectBarcodesRequest to read QR codes from images and adjust the image orientation to match that of the QR code's finder patterns. The QR code was successfully read, and the coordinates of the QR code were obtained. Upon checking the obtained topLeft, topRight, and bottomLeft coordinates, they always seem to match the topLeft, topRight, and bottomLeft coordinates of the finder patterns.

Is it specified that the topLeft, topRight, and bottomLeft coordinates obtained with VNDetectBarcodesRequest match the topLeft, topRight, and bottomLeft finder patterns, or do they just happen to match? I would appreciate it if you could tell me whether this correspondence is guaranteed by the specification. Thank you for your help.
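For illustration, the corner points in question come from the VNRectangleObservation superclass; a sketch of the orientation math, assuming the corners do track the finder patterns, which is exactly the unconfirmed behavior being asked about:

```swift
import Vision
import CoreGraphics

// Sketch: derive the QR code's in-plane rotation from the observation's
// corner points. This assumes topLeft/topRight track the finder patterns,
// which the documentation does not explicitly guarantee.
func rotation(of observation: VNBarcodeObservation) -> CGFloat {
    let dx = observation.topRight.x - observation.topLeft.x
    let dy = observation.topRight.y - observation.topLeft.y
    return atan2(dy, dx) // radians, in Vision's normalized coordinate space
}
```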
Replies: 0 · Boosts: 0 · Views: 180 · Activity: 3w
DetectBarcodesRequest operates differently among machines
Hi Apple Developers! I’m using DetectBarcodesRequest to identify QR codes in images and PDFs. However, I’m facing an issue where the request doesn’t detect the barcode in certain documents on some machines, while it works on others with the same document. The only common factor I’ve noticed is that the machines that successfully identify the QR code in the “problematic” document are all developer machines with Xcode installed. Interestingly, this doesn’t seem to be related to processor type (Intel vs. Apple Silicon). Could you please provide some guidance or leads on how to resolve this issue?
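One variable worth logging on each machine is the detector revision, since different OS builds can ship different revisions of the barcode detector. A sketch using the classic Vision API (an assumption, since the post does not say which API level is in use):

```swift
import Vision

// Sketch: log and pin the barcode detector revision so results can be
// compared across machines on equal footing.
print("Supported revisions:", Array(VNDetectBarcodesRequest.supportedRevisions))
let request = VNDetectBarcodesRequest()
request.revision = VNDetectBarcodesRequest.supportedRevisions.last
    ?? VNDetectBarcodesRequest.defaultRevision
print("Using revision:", request.revision)
```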
Replies: 0 · Boosts: 0 · Views: 202 · Activity: Jan ’25
OCR does not work
Hi, I'm working on a very simple app that tries to read a coordinates card and paste the data into different fields. The card's layout is columns 1-10, rows A-J, and a two-digit number in each cell. In my app, I have a field for each of those cells (A1, A2, ...). I want the OCR to read that card and paste the info, but I just can't get it to work. I have two problems:

- The camera won't close. It remains open until I press the SAVE button (this is not good, because a user could take 3, 4, 5... pictures of the same card with possibly different results, and then which is the good one?).
- After I press SAVE, I can see the OCR kind of works (the console prints all the data read), but the info is not pasted at all.

Any idea? I know it's hard to tell what's wrong, but I've tried ChatGPT and whatever it suggests just doesn't work. This is the code from the scan view:

```swift
import SwiftUI
import Vision
import VisionKit

struct ScanCardView: UIViewControllerRepresentable {
    @Binding var scannedCoordinates: [String: String]
    var useLettersForColumns: Bool
    var numberOfColumns: Int
    var numberOfRows: Int

    @Environment(\.presentationMode) var presentationMode

    func makeUIViewController(context: Context) -> VNDocumentCameraViewController {
        let scannerVC = VNDocumentCameraViewController()
        scannerVC.delegate = context.coordinator
        return scannerVC
    }

    func updateUIViewController(_ uiViewController: VNDocumentCameraViewController, context: Context) {}

    func makeCoordinator() -> Coordinator {
        return Coordinator(self)
    }

    class Coordinator: NSObject, VNDocumentCameraViewControllerDelegate {
        let parent: ScanCardView

        init(_ parent: ScanCardView) {
            self.parent = parent
        }

        func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan) {
            print("Scan finished, processing image...")
            guard scan.pageCount > 0, let image = scan.imageOfPage(at: 0).cgImage else {
                print("Could not get the scanned image.")
                controller.dismiss(animated: true, completion: nil)
                return
            }
            recognizeText(from: image)
            DispatchQueue.main.async {
                print("Finishing OCR process and closing the camera.")
                controller.dismiss(animated: true, completion: nil)
            }
        }

        func documentCameraViewControllerDidCancel(_ controller: VNDocumentCameraViewController) {
            print("Scan cancelled by the user.")
            controller.dismiss(animated: true, completion: nil)
        }

        func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFailWithError error: Error) {
            print("Scan error: \(error.localizedDescription)")
            controller.dismiss(animated: true, completion: nil)
        }

        private func recognizeText(from image: CGImage) {
            let request = VNRecognizeTextRequest { (request, error) in
                guard let observations = request.results as? [VNRecognizedTextObservation], error == nil else {
                    print("Text recognition error: \(String(describing: error?.localizedDescription))")
                    DispatchQueue.main.async {
                        self.parent.presentationMode.wrappedValue.dismiss()
                    }
                    return
                }
                let recognizedStrings = observations.compactMap { observation in
                    observation.topCandidates(1).first?.string
                }
                print("Recognized text: \(recognizedStrings)")
                let filteredCoordinates = self.filterValidCoordinates(from: recognizedStrings)
                DispatchQueue.main.async {
                    print("Coordinates detected after filtering: \(filteredCoordinates)")
                    self.parent.scannedCoordinates = filteredCoordinates
                }
            }
            request.recognitionLevel = .accurate

            let handler = VNImageRequestHandler(cgImage: image, options: [:])
            DispatchQueue.global(qos: .userInitiated).async {
                do {
                    try handler.perform([request])
                    print("OCR completed and data processed.")
                } catch {
                    print("OCR request error: \(error.localizedDescription)")
                }
            }
        }

        private func filterValidCoordinates(from strings: [String]) -> [String: String] {
            var result: [String: String] = [:]
            print("Text before filtering: \(strings)")
            for string in strings {
                let trimmedString = string.replacingOccurrences(of: " ", with: "")
                if parent.useLettersForColumns {
                    let pattern = "^[A-J]\\d{1,2}$" // Letters A-J followed by 1 or 2 digits
                    if trimmedString.range(of: pattern, options: .regularExpression) != nil {
                        print("Valid coordinate detected (letters): \(trimmedString)")
                        result[trimmedString] = "Value" // test assignment
                    }
                } else {
                    let pattern = "^[1-9]\\d{0,1}$" // Numbers only, 1 to 99
                    if trimmedString.range(of: pattern, options: .regularExpression) != nil {
                        print("Valid coordinate detected (numbers): \(trimmedString)")
                        result[trimmedString] = "Value"
                    }
                }
            }
            print("Final coordinates after filtering: \(result)")
            return result
        }
    }
}
```
Replies: 0 · Boosts: 0 · Views: 329 · Activity: Jan ’25
How to customize VNDocumentCameraViewController
Hi, I’m learning MAUI and was trying to use the VNDocumentCameraViewController provided by VisionKit to scan documents. It is working fine, but I realized that I was not able to customize some of the options provided by default, such as disabling the auto-scan option. Is there any way to disable auto scan, or are there any alternatives with the same functionality as VNDocumentCameraViewController that are more customizable?
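For reference, VNDocumentCameraViewController exposes essentially no configuration beyond its delegate, as the minimal setup below shows (a sketch in Swift; in MAUI the same calls would go through the iOS bindings):

```swift
import UIKit
import VisionKit

final class ScanHostViewController: UIViewController, VNDocumentCameraViewControllerDelegate {
    // Sketch: this is the entire public configuration surface of the
    // document camera. There is no property for disabling auto-capture;
    // deeper customization means building a scanner on AVFoundation instead.
    func presentScanner() {
        guard VNDocumentCameraViewController.isSupported else { return }
        let cameraVC = VNDocumentCameraViewController()
        cameraVC.delegate = self
        present(cameraVC, animated: true)
    }

    func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                      didFinishWith scan: VNDocumentCameraScan) {
        controller.dismiss(animated: true)
    }
}
```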
Replies: 1 · Boosts: 0 · Views: 323 · Activity: Jan ’25
VisionKit: Improve barcode scanning accuracy
Hi all, I am developing an app that scans barcodes using VisionKit, but I am facing some difficulties: the accuracy is not where I hoped it would be. Changing the qualityLevel parameter from .balanced to .accurate made barcode reading slightly better, but it still misreads in some cases. I previously implemented the same barcode scanning app with AVFoundation, and that had much better accuracy. I tested it, and barcodes that were read correctly with AVFoundation were read incorrectly with VisionKit. Is there any way to improve the accuracy of barcode reading in VisionKit, or is this built in and not something the developer can change? Either way, any ideas on how to improve reading accuracy would help. Thanks in advance!
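For reference, the two knobs that typically matter here are the quality level and how narrowly the symbology list is scoped; the EAN-13 choice below is just an illustrative assumption:

```swift
import VisionKit

// Sketch: narrow the symbology list to what you actually expect and use
// the .accurate quality level; fewer candidate symbologies generally
// means fewer misreads. EAN-13 here is an example, not a recommendation.
let scanner = DataScannerViewController(
    recognizedDataTypes: [.barcode(symbologies: [.ean13])],
    qualityLevel: .accurate,
    recognizesMultipleItems: false
)
```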
Replies: 0 · Boosts: 0 · Views: 291 · Activity: Dec ’24
About VisionKit DataScannerViewController
Hi, I'm having a problem with DataScannerViewController. I'm using the volume barcode scanning feature in my app; prior to this I was using an AVCaptureDevice with the ultra-wide angle camera set. After discovering DataScannerViewController, we planned to replace the previous, now obsolete code with it. Altogether it was OK, but when I want to set the ultra-wide angle, I don't know where to start. I tried to get minZoomFactor and found that it returns 0.0, and I tried to set zoomFactor to 1.0 and found that it is not applied. Note: in the delegate method func dataScannerDidZoom(_ dataScanner: DataScannerViewController), when I try to get minZoomFactor and set zoomFactor, I find that it works! What should I do next? I want to use only DataScannerViewController and still get the ultra-wide angle. Thanks a lot.
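A workaround consistent with the behavior described above is to defer the zoom change until scanning has started, when the zoom range reports real values (a sketch, not confirmed Apple guidance):

```swift
import VisionKit

// Sketch: apply the zoom only once the camera session is live, when
// minZoomFactor stops reporting 0.0. A factor below 1.0 selects the
// ultra-wide camera on devices that have one (assumption).
@MainActor
func startUltraWideScanning(with scanner: DataScannerViewController) throws {
    try scanner.startScanning()
    Task { @MainActor in
        scanner.zoomFactor = scanner.minZoomFactor
    }
}
```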
Replies: 1 · Boosts: 0 · Views: 428 · Activity: Jan ’25
Creating a multiview video playback experience in visionOS. There is no back button on the player.
Feature introduction: https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/

When I use this feature, my video player has no back action in the player, and we did not find any system-provided method "addChildViewControllerAndView(form)". Referencing this document also did not work: https://developer.apple.com/documentation/avkit/adopting-the-system-player-interface-in-visionos

As soon as you enter these lines of code, there is no back button, only full screen and zoom out:

```swift
let playerController = AVPlayerViewController()

// Enable the multiview experience along with the default recommended set.
playerController.experienceController.allowedExperiences = .recommended(including: [.multiview])
```
Replies: 8 · Boosts: 0 · Views: 761 · Activity: Nov ’24
Front-Facing Camera Rotation Matrix in ARKit: Consistency, Transformations, and `ARFrame.camera` Alignment
I'm seeking detailed information about the rotation matrix of the iPhone's front-facing (selfie) camera when using ARKit. Specifically, I need to understand:

- The exact rotation matrix applied to the front-facing camera's output in ARKit.
- Whether this matrix is consistent across all iPhone models or if there are variations.
- If there are any transformations applied to align the camera's coordinate system with the device's orientation, particularly in portrait mode.
- How this rotation matrix relates to the transform property of `ARFrame.camera`.
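For concreteness, the matrix in question can be inspected per frame as sketched below; whether the rotation block is fixed per device model is the open question:

```swift
import ARKit
import simd

// Sketch: pull the 3x3 rotation block out of the camera-to-world transform
// that a face-tracking (front camera) session reports each frame.
func logCameraRotation(of frame: ARFrame) {
    let t = frame.camera.transform // simd_float4x4, camera-to-world
    let rotation = simd_float3x3(
        simd_make_float3(t.columns.0),
        simd_make_float3(t.columns.1),
        simd_make_float3(t.columns.2)
    )
    print("Rotation matrix:", rotation)
}
```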
Replies: 0 · Boosts: 0 · Views: 469 · Activity: Oct ’24
Unable to Get Result from DetectHorizonRequest - Result is nil
I am using Apple’s Vision framework with DetectHorizonRequest to detect the horizon in an image. Here is my code:

```swift
func processHorizonImage(_ ciImage: CIImage) async {
    let request = DetectHorizonRequest()
    do {
        let result = try await request.perform(on: ciImage)
        print(result)
    } catch {
        print(error)
    }
}
```

After calling the perform method, the result is nil. To ensure the request's correctness, I have verified the following:

- The input CIImage is valid and contains a visible horizon.
- No errors are being thrown.
- The relevant frameworks are properly imported.

Given that my image contains a clear horizon, why am I still not getting any results? I would appreciate any help or suggestions to resolve this issue. Thank you for your support!
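One thing worth ruling out is orientation, since a nil result simply means no horizon was found rather than an error. A sketch, assuming the perform(on:orientation:) overload of the new Vision API and that .up matches how the CIImage was produced:

```swift
import Vision

// Sketch: nil from DetectHorizonRequest means "no horizon found", not an
// error. Passing the orientation explicitly rules out a sideways input
// (assumption: the source image is logically upright).
func detectHorizon(in ciImage: CIImage) async {
    let request = DetectHorizonRequest()
    do {
        if let observation = try await request.perform(on: ciImage, orientation: .up) {
            print("Horizon:", observation)
        } else {
            print("No horizon detected in this image.")
        }
    } catch {
        print("Horizon request failed:", error)
    }
}
```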
Replies: 0 · Boosts: 0 · Views: 470 · Activity: Oct ’24
Not getting camera frame using enterprise API in Vision Pro
I don't get a cameraFrame from cameraFrameUpdates in my Vision Pro app. Why is nothing arriving, and where am I going wrong in the code? Please guide me.

```swift
for await cameraFrame in cameraFrameUpdates {
    print("cameraFrame:: \(cameraFrame)")
}
```

```swift
var body: some View {
    VStack {
        if self.finalImage != nil {
            self.finalImage!
                .resizable()
                .scaledToFit()
        } else {
            image
                .resizable()
                .scaledToFit()
        }
    }
    .task {
        if #available(visionOS 2.0, *) {
            guard CameraFrameProvider.isSupported else {
                print("CameraFrameProvider not supported.")
                return
            }
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [CameraFrameProvider.CameraPosition.left])
            let cameraFrameProvider = CameraFrameProvider()
            do {
                try await arkitSession.run([cameraFrameProvider])
            } catch {
                guard let sessionError = error as? ARKitSession.Error else {
                    preconditionFailure("ARKitSession.run() returned a non-session error: \(error)")
                }
                print("ARKitSession.run() failed: \(sessionError)")
                return
            }
            guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                preconditionFailure("Failed to get an async sequence for the first format.")
            }
            print("cameraFrameUpdates:: \(cameraFrameUpdates)")
            for await cameraFrame in cameraFrameUpdates {
                print("cameraFrame:: \(cameraFrame)")
                guard let leftSample = cameraFrame.sample(for: .left) else {
                    print("CameraFrameProviderSample - nil camera frame left sample")
                    continue
                }
                self.pixelBuffer = leftSample.pixelBuffer
                print("======== PIXEL BUFFER ::: \(self.pixelBuffer) ========")
                self.finalImage = self.setImage()
            }
        } else {
            // Fallback on earlier versions
        }
    }
}
```
Replies: 3 · Boosts: 0 · Views: 720 · Activity: Sep ’24