The "Cancel" button for VNDocumentCameraViewController is not displayed on iPadOS 26. This issue appears to be specific to iPad, as the button appears correctly on iPhone.
Prerequisite: after the MDM app issues the restriction command, the camera on the phone is no longer visible (unusable).
After upgrading to iOS 26.1, the isSourceTypeAvailable: method keeps returning true for UIImagePickerControllerSourceTypeCamera even when the camera is unavailable.
On iOS 26.0.1 the same method behaves normally, returning false when the camera is unavailable and true when it is available.
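For reference, this is the check in question (a minimal Swift sketch of the Objective-C call above):
import UIKit

// Minimal sketch: the camera-availability check whose result changed.
// On iOS 26.0.1 this returns false while MDM has the camera disabled;
// on iOS 26.1 it keeps returning true in the same state.
let cameraAvailable = UIImagePickerController.isSourceTypeAvailable(.camera)
print("Camera available: \(cameraAvailable)")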
We're observing several UI issues with VNDocumentCameraViewController on devices running iOS 26. These screens were functioning correctly in earlier iOS versions.
Issue 1 - On the edge correction screen, the top bar now appears as a gray strip beneath the status bar, whereas in previous iOS versions, it was positioned at the bottom of the screen. Do we have any workarounds to address this issue?
Issue 2 - The edit buttons and their labels are not clearly visible, which affects usability.
I'm using Xcode 16.4 to build for iOS 26, and the usage is as below:
let scanner = VNDocumentCameraViewController()
scanner.delegate = self
self.present(scanner, animated: true)
Hi,
I'm trying to use the new RecognizeDocumentsRequest from the Vision framework to read a receipt. It looks very promising, being able to read paragraphs and lines and to detect data. Unfortunately, so far it seems to read every line on the receipt as a separate paragraph, and when there is extra space within a line it splits it into two paragraphs.
Is there perhaps an Apple Engineer who knows if this is expected behaviour or if I should file a Feedback for this?
Code setup:
import Vision

let request = RecognizeDocumentsRequest()
let observations = try await request.perform(on: image)
guard let document = observations.first?.document else {
    return
}
for paragraph in document.paragraphs {
    print(paragraph.transcript)
    for data in paragraph.detectedData {
        switch data.match.details {
        case .phoneNumber(let data):
            print("Phone: \(data)")
        case .postalAddress(let data):
            print("Postal: \(data)")
        case .calendarEvent(let data):
            print("Calendar: \(data)")
        case .moneyAmount(let data):
            print("Money: \(data)")
        case .measurement(let data):
            print("Measurement: \(data)")
        default:
            continue
        }
    }
}
See the attached image for an example of a receipt I'd like to parse. The top 3 lines are the name, street, and postal code + city. These are all separate paragraphs. Checking detectedData does see the street (2nd line) as a PostalAddress, but not the complete address. Might that be a locale thing, since it's a Dutch address?
Lower on the receipt, it also sees the block with "Pomp 1 95 Ongelood" and the lines below it as separate paragraphs, first picking up the left side and after that the right side. So it's something like this:
*
Pomp 1
Volume
Prijs
€
TOTAAL
*
BTW
Netto
21.00 %
95 Ongelood
41,90 l
1.949/ 1
81.66
€
14.17
67.49
The design of VNDocumentCameraViewController has been updated in iOS 26. The 'Done' button that appears once at least one page has been scanned is now blue by default (Liquid Glass).
Changing the tintColor of the navigation bar or the bar button items has no effect.
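For reference, a minimal sketch of the kind of overrides that have no visible effect (the appearance-proxy line is just another guess on my part):
let scanner = VNDocumentCameraViewController()
scanner.view.tintColor = .systemRed // does not recolor the Done button
UIBarButtonItem.appearance(whenContainedInInstancesOf: [VNDocumentCameraViewController.self]).tintColor = .systemRed // a guess; untested
present(scanner, animated: true)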
So, how does one change the color of this button to match the app's accent color?
Thank you very much!
Description
We observed multiple UI issues in VisionKit (VNDocumentCameraViewController) on iPad devices running iPadOS 26 (from Public Beta 4 onwards):
Tab bar button titles are not properly visible due to color/contrast issues.
An extra back button appears in the navigation bar when editing a captured image in landscape mode.
These appear to be iPadOS 26 bugs that we cannot work around, as Apple does not provide public APIs to customize or override VNDocumentCameraViewController; VisionKit relies on private ICDocCam* classes, which are not accessible for modification.
Steps to Reproduce
Open the app on an iPad running iPadOS 26 (Public Beta 4 or later).
Switch the device to landscape mode.
Launch document scanning using VNDocumentCameraViewController.
Capture a document and tap Keep Scan.
Go to the edit captured image screen.
Observed Behavior:
Tab bar button titles are not clearly visible (color/contrast issue).
An extra back button is displayed in the navigation bar.
How can I obtain the physical memory size of the Vision Pro, and how much memory is currently available?
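For context, a minimal sketch of what I have in mind, assuming the standard iOS calls carry over to visionOS:
import Foundation
import os

// Sketch (assumption: these iOS APIs behave the same on visionOS).
// physicalMemory is total RAM; os_proc_available_memory reports how
// much memory the current process can still allocate.
let physicalBytes = ProcessInfo.processInfo.physicalMemory
let availableBytes = os_proc_available_memory()
print("Physical: \(physicalBytes / 1_048_576) MB, available: \(availableBytes / 1_048_576) MB")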
Before you post that the camera doesn't work on the Simulator: that's no longer true. I've built a solution that makes the Simulator believe there's an actual hardware device connected, allowing users to stream the macOS camera to the iOS Simulator (for more info, see RocketSim's documentation: https://docs.rocketsim.app/features/hzQMSrSga7BGWvxdNVdwYs/simulator-camera-support/58tQ5jvevLNSnyUEA7VgAv)
Now, it works for VNDocumentCameraViewController, but when I try opening DataScannerViewController, I directly run into:
Failed to start scanning: The operation couldn’t be completed. (VisionKit.DataScannerViewController.ScanningUnavailable error 0.)
My question:
How does this view controller determine whether scanning is available?
Is there a certain capability the available AVCaptureDevices need to support, maybe?
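For reference, these are the public availability gates I know of (a minimal sketch; presumably one of them is what fails in the Simulator):
import VisionKit

// DataScannerViewController's public availability checks:
// isSupported covers device/OS capability; isAvailable covers runtime
// availability such as camera access.
if DataScannerViewController.isSupported, DataScannerViewController.isAvailable {
    // safe to present the scanner
}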
Any direction would be helpful for me to make this work for developers, making them build apps faster!
In Apple Vision Pro, I want to implement a HUD similar to the one in Medivis' SurgicalAR product (i.e. the UI is fixed in the field of view rather than in space). How should I do it?
Posting a follow up question after the WWDC 2025 Machine Learning AI & Frameworks Group Lab on June 12.
Regarding the on-device APIs of any of the AI frameworks (Foundation Models, Vision framework, etc.): is there a response condition or path where the API outsources its input to ChatGPT if the user has allowed this, like Siri does?
Ignore this if the answer is no: is this handled behind the scenes or by the developer?
Hi all, I have a problem when using iOS 18.4.1.
I have an iPhone 16 Pro and an iPad Air, both updated to iOS 18.4.1.
I tried following the sample code below. However, when I run the app for around 30 seconds to 1 minute, the application crashes.
When I use another iPad with iOS 17, it does not have the same problem.
https://developer.apple.com/documentation/createml/creating-an-action-classifier-model
https://developer.apple.com/documentation/createml/detecting_human_actions_in_a_live_video_feed#overview
Hi everyone,
I'm using the Vision framework’s ImageAestheticsScoresObservation class (https://developer.apple.com/documentation/vision/imageaestheticsscoresobservation).
I noticed that the returned overallScore is sometimes negative. Could someone confirm whether the expected range of the score is -1.0 to 1.0?
The documentation doesn’t explicitly state the possible score range, so I’d appreciate any clarification or insights.
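For reference, a minimal sketch of how I'm obtaining the score (imageURL is a placeholder for my input):
import Vision

// Sketch: compute aesthetics scores for an image on disk.
// overallScore is the value that sometimes comes back negative.
let request = CalculateImageAestheticsScoresRequest()
let observation = try await request.perform(on: imageURL)
print("Overall: \(observation.overallScore), utility: \(observation.isUtility)")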
Thanks in advance!
Hi, DataScannerViewController doesn't recognize currency amounts less than 1.00 (e.g. 0.59 USD, 0.99 EUR, etc.). Why? How can I solve this problem?
This behavior is not described in Apple's documentation. Is there a solution?
This is my code:
func makeUIViewController(context: Context) -> DataScannerViewController {
    let dataScanner = DataScannerViewController(recognizedDataTypes: [.text(textContentType: .currency)])
    return dataScanner
}
Description:
I'm developing a travel/panorama viewing app for visionOS that allows users to view 360° panoramic images in an immersive space. When users enter panorama viewing mode, I want to provide a fully immersive experience where the main interface window and Earth 3D globe window are hidden.
I've implemented the app following Apple's documentation on Creating Fully Immersive Experiences, but when users enter the immersive space, both the main window and the Earth 3D window remain visible, diminishing the immersive experience.
Implementation Details:
My app has three main components:
A main content window showing panorama thumbnails
A 3D globe window (volumetric) showing locations
An immersive space for viewing 360° panoramas
I'm using .immersionStyle(selection: $panoImageView, in: .full) to create a fully immersive experience, but other windows remain visible.
Relevant Code:
import SwiftUI

@main
struct Travel_ImmersiveApp: App {
    @StateObject private var appModel = AppModel()
    @State private var panoImageView: ImmersionStyle = .full

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environmentObject(appModel)
        }
        .windowStyle(.automatic)
        .defaultSize(width: 1280, height: 825)

        WindowGroup(id: "Earth") {
            Globe3DView()
                .environmentObject(appModel)
                .onAppear {
                    appModel.isGlobeWindowOpen = true
                    appModel.globeWindowOpen = true
                }
                .onDisappear {
                    if !appModel.shouldCloseApp {
                        appModel.handleGlobeWindowClose()
                    }
                }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)
        .windowResizability(.contentSize)

        ImmersiveSpace(id: "ImmersiveView") {
            ImmersiveView()
                .environmentObject(appModel)
        }
        .immersionStyle(selection: $panoImageView, in: .full)
    }
}
Opening the Immersive Space:
func getPanoImageAndOpenImmersiveSpace() async {
    appModel.clearMemoryCache()
    do {
        let canView = appModel.canViewImage(image)
        if canView {
            let downloadedImage = try await appModel.getPanoramaImage(for: image) { progress in
                Task { @MainActor in
                    cardState = .loading(progress: progress)
                }
            }
            await MainActor.run {
                appModel.updateCurrentImage(image, panoramaImage: downloadedImage)
            }
            if !appModel.immersiveSpaceOpened {
                try await openImmersiveSpace(id: "ImmersiveView")
                await MainActor.run {
                    appModel.immersiveSpaceOpened = true
                    cardState = .normal
                }
            } else {
                await MainActor.run {
                    appModel.updateImmersiveView = true
                    cardState = .normal
                }
            }
        } else {
            await MainActor.run {
                appModel.errorMessage = "You do not have permission to view this image."
                cardState = .normal
            }
        }
    } catch {
        // Error handling
    }
}
Immersive View Implementation:
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @EnvironmentObject var appModel: AppModel

    var body: some View {
        RealityView { content in
            let rootEntity = Entity()
            content.add(rootEntity)
            Task {
                if let selectedImage = appModel.selectedImage,
                   appModel.canViewImage(selectedImage) {
                    await loadPanorama(for: rootEntity)
                }
            }
        } update: { content in
            if appModel.updateImmersiveView,
               let selectedImage = appModel.selectedImage,
               appModel.canViewImage(selectedImage),
               let rootEntity = content.entities.first {
                Task {
                    await loadPanorama(for: rootEntity)
                    appModel.updateImmersiveView = false
                }
            }
        }
        .onAppear {
            print("ImmersiveView appeared")
        }
        .onDisappear {
            appModel.resetImmersiveState()
        }
    }

    // loadPanorama implementation...
}
What I've Tried
Set immersionStyle to .full as recommended in the documentation
Confirmed that the immersive space is properly opened and displaying panoramas
Verified that the state management for the immersive space is working correctly
Questions
How can I ensure that when the user enters the immersive panorama viewing experience, all other windows (main interface and Earth 3D globe) are automatically hidden?
Is there a specific API or approach I'm missing to properly implement a fully immersive experience that hides all other windows?
Do I need to manually dismiss the windows when opening the immersive space, and if so, what's the best approach for doing this?
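For question 3, the closest I have come up with is a sketch like the following, using the dismissWindow environment action once the space has opened (I am not sure this is the intended approach):
// Guess for question 3: dismiss the other windows only after the
// immersive space has actually opened. "Earth" is the globe window's
// id from the scene declaration above.
@Environment(\.openImmersiveSpace) private var openImmersiveSpace
@Environment(\.dismissWindow) private var dismissWindow

func enterPanorama() async {
    let result = await openImmersiveSpace(id: "ImmersiveView")
    if case .opened = result {
        dismissWindow(id: "Earth")
    }
}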
Any guidance or sample code would be greatly appreciated. Thank you!
Did something change in face detection / the Vision framework on iOS 15?
Using VNDetectFaceLandmarksRequest and reading VNFaceLandmarkRegion2D to detect eyes is not working on iOS 15 as it did before. I am running the exact same code on an iOS 14 and an iOS 15 device, and the coordinates are different, as seen in the screenshot.
Any ideas?
Dear Apple Developer Team,
I am writing to request the addition of GS1 DataBar Stacked (both regular and expanded variants) to the barcode symbologies supported by the Vision framework (VNBarcodeSymbology) and VisionKit's DataScannerViewController.
Currently, Vision supports several GS1 DataBar formats, such as:
VNBarcodeSymbology.gs1DataBar
VNBarcodeSymbology.gs1DataBarExpanded
VNBarcodeSymbology.gs1DataBarLimited
However, GS1 DataBar Stacked is widely used in industries such as retail, pharmaceuticals, and logistics, where space constraints prevent the use of the standard GS1 DataBar format. Many businesses rely on this symbology to encode GTINs and other product data, but Apple's barcode scanning API does not explicitly support it.
Why This Feature Matters:
Essential for Small Packaging: GS1 DataBar Stacked is commonly used on small product labels where a standard linear barcode does not fit.
Widespread Industry Adoption: Many point-of-sale (POS) systems and inventory management tools require this symbology.
Improves iOS Adoption for Enterprise Use: Adding support would make Apple’s Vision framework a more viable solution for businesses that currently rely on third-party barcode scanning SDKs.
Feature Request:
Please add GS1 DataBar Stacked and GS1 DataBar Expanded Stacked to the recognized symbologies in:
VNBarcodeSymbology (for Vision framework)
DataScannerViewController (for VisionKit)
This addition would enhance the versatility of Apple’s barcode scanning tools and reduce the need for third-party libraries.
I appreciate your consideration of this request and would be happy to provide more details or test implementations if needed.
Thank you for your time and support!
Best regards
Hello,
I am currently developing an application that requires barcode scanning using Apple’s Vision framework (VNBarcodeSymbology). I noticed that the framework supports several GS1 DataBar symbologies, such as:
VNBarcodeSymbology.gs1DataBar
VNBarcodeSymbology.gs1DataBarExpanded
VNBarcodeSymbology.gs1DataBarLimited
However, I could not find any explicit reference to support for GS1 DataBar Stacked (both regular and expanded variants).
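One check I can run locally (a minimal sketch) is to list every symbology the current OS reports via the request's supportedSymbologies() method:
import Vision

// Sketch: print all symbologies the OS supports, to check whether any
// stacked DataBar variant is listed.
let request = VNDetectBarcodesRequest()
for symbology in try request.supportedSymbologies() {
    print(symbology.rawValue)
}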
Could you confirm whether GS1 DataBar Stacked is currently supported in VisionKit's DataScannerViewController or VNBarcodeObservation? If not, are there any plans to include support for this symbology in a future iOS update?
This functionality is critical for my use case, as GS1 DataBar Stacked barcodes are widely used in retail, pharmaceuticals, and logistics, where space constraints prevent the use of standard GS1 DataBar formats.
I appreciate any clarification on this matter and would be happy to provide additional details if needed.
I am creating an application that uses VNDetectBarcodesRequest to read QR codes from images and adjust the image orientation to match that of the QR code finder pattern.
The QR code was successfully read, and the coordinates of the QR code were obtained. Upon checking the obtained topLeft, topRight, and bottomLeft coordinates, they always seem to match the topLeft, topRight, and bottomLeft corners of the finder patterns.
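For reference, a minimal sketch of how I read the corners (cgImage stands in for my input image):
import Vision

// Sketch: detect QR codes and print the corner coordinates, which are
// normalized with a lower-left origin. These appear to track the
// finder patterns.
let request = VNDetectBarcodesRequest()
request.symbologies = [.qr]
let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try handler.perform([request])
for observation in request.results ?? [] {
    print(observation.topLeft, observation.topRight, observation.bottomLeft)
}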
Is it specified that the coordinates of topLeft, topRight, and bottomLeft obtained with VNDetectBarcodesRequest match the topLeft, topRight, and bottomLeft of the finder pattern? Or do they just happen to match?
I would appreciate it if you could tell me if the matching of coordinates is a specification.
Thank you for your help.
I would like to integrate the Object Capture API with an ML model for analysis, so I will need to get the current frame as a CGImage for further processing.
Thanks in advance!
Hi Apple Developers!
I'm using DetectBarcodesRequest to identify QR codes in some images and PDFs.
However, I'm facing an issue where the request doesn't detect the barcode in certain documents on some machines, while it works on others with the same document.
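The detection call is essentially this (a minimal sketch; imageURL is a placeholder for the document being scanned):
import Vision

// Sketch: detect QR codes with the Swift Vision API.
var request = DetectBarcodesRequest()
request.symbologies = [.qr]
let observations = try await request.perform(on: imageURL)
print("Found \(observations.count) barcode(s)")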
The only common factor I've noticed is that the machines that successfully identify the QR code in the "problematic" document are all powerful developer machines with Xcode installed. Interestingly, this doesn't seem to be related to processor type (Intel vs. Apple Silicon).
Could you please provide some guidance or leads on how to resolve this issue?