VisionKit


Scan documents with the camera on iPhone and iPad devices using VisionKit.

Posts under VisionKit tag

48 Posts
Creating a multiview video playback experience in visionOS. There is no back button on the player.
Function introduction: https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/

When I use this feature, my video player has no back action. We also could not find any system-provided method "addChildViewControllerAndView(form)", and referencing this document did not help either: https://developer.apple.com/documentation/avkit/adopting-the-system-player-interface-in-visionos

As soon as you add this line of code:

```swift
let playerController = AVPlayerViewController()
// Enable the multiview experience along with the default recommended set.
playerController.experienceController.allowedExperiences = .recommended(including: [.multiview])
```

there is no back button, only full screen and zoom out.
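For context, a minimal sketch of how the snippet above might be embedded; the SwiftUI wrapper and all names here are illustrative assumptions, not the poster's confirmed setup:

```swift
import SwiftUI
import AVKit

// A sketch: wrap the system player so AVKit manages its own chrome,
// with the multiview experience enabled as in the post above.
struct MultiviewPlayerView: UIViewControllerRepresentable {
    let url: URL

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let playerController = AVPlayerViewController()
        playerController.player = AVPlayer(url: url)
        // Enable the multiview experience along with the default recommended set.
        playerController.experienceController.allowedExperiences = .recommended(including: [.multiview])
        return playerController
    }

    func updateUIViewController(_ controller: AVPlayerViewController, context: Context) {}
}
```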
Replies: 6 · Boosts: 0 · Views: 178 · Activity: 20h
Front-Facing Camera Rotation Matrix in ARKit: Consistency, Transformations, and `ARFrame.camera` Alignment
I'm seeking detailed information about the rotation matrix of the iPhone's front-facing (selfie) camera when using ARKit. Specifically, I need to understand:

- The exact rotation matrix applied to the front-facing camera's output in ARKit.
- Whether this matrix is consistent across all iPhone models, or if there are variations.
- Whether any transformations are applied to align the camera's coordinate system with the device's orientation, particularly in portrait mode.
- How this rotation matrix relates to the transform property of `ARFrame.camera`.
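One way to inspect this empirically (a sketch; it assumes a face-tracking session, which is what drives the front camera in ARKit) is to log the camera transform per frame and compare the rotation component across devices and orientations:

```swift
import ARKit

// A minimal sketch: log the camera-to-world transform ARKit reports for the
// front camera so its rotation component can be inspected per frame.
final class FrontCameraInspector: NSObject, ARSessionDelegate {
    private let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // transform is camera-to-world; its upper-left 3x3 is the rotation matrix.
        let t = frame.camera.transform
        let rotation = simd_float3x3(columns: (
            simd_make_float3(t.columns.0),
            simd_make_float3(t.columns.1),
            simd_make_float3(t.columns.2)
        ))
        print("rotation:", rotation, "euler:", frame.camera.eulerAngles)
    }
}
```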
Replies: 0 · Boosts: 0 · Views: 249 · Activity: 3w
Unable to Get Result from DetectHorizonRequest - Result is nil
I am using Apple’s Vision framework with DetectHorizonRequest to detect the horizon in an image. Here is my code:

```swift
func processHorizonImage(_ ciImage: CIImage) async {
    let request = DetectHorizonRequest()
    do {
        let result = try await request.perform(on: ciImage)
        print(result)
    } catch {
        print(error)
    }
}
```

After calling the perform method, I get a nil result. To rule out problems with the request, I have verified the following:

- The input CIImage is valid and contains a visible horizon.
- No errors are being thrown.
- The relevant frameworks are properly imported.

Given that my image contains a clear horizon, why am I still not getting any results? I would appreciate any help or suggestions to resolve this issue. Thank you for your support! This is the image
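One way to cross-check the input (a sketch using the older VN-prefixed API, which reports the horizon as an observation array rather than an optional result):

```swift
import Vision

// A minimal sketch: run the legacy horizon detector on the same image to
// confirm whether Vision can find a horizon in it at all.
func detectHorizonLegacy(_ ciImage: CIImage) throws -> VNHorizonObservation? {
    let request = VNDetectHorizonRequest()
    let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])
    try handler.perform([request])
    return request.results?.first
}

// Usage: the observation's angle is the horizon tilt in radians.
// if let horizon = try detectHorizonLegacy(ciImage) { print(horizon.angle) }
```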
Replies: 0 · Boosts: 0 · Views: 207 · Activity: Oct ’24
Not getting camera frame using enterprise API in Vision Pro
I don't get a cameraFrame from cameraFrameUpdates in my Vision Pro app. Why is nothing arriving, and where am I going wrong in the code? Please guide me.

```swift
for await cameraFrame in cameraFrameUpdates {
    print("cameraFrame:: \(cameraFrame)")
}
```

```swift
var body: some View {
    VStack {
        if let finalImage = self.finalImage {
            finalImage
                .resizable()
                .scaledToFit()
        } else {
            image
                .resizable()
                .scaledToFit()
        }
    }
    .task {
        if #available(visionOS 2.0, *) {
            guard CameraFrameProvider.isSupported else {
                print("CameraFrameProvider not supported.")
                return
            }
            let formats = CameraVideoFormat.supportedVideoFormats(
                for: .main,
                cameraPositions: [CameraFrameProvider.CameraPosition.left]
            )
            let cameraFrameProvider = CameraFrameProvider()
            do {
                try await arkitSession.run([cameraFrameProvider])
            } catch {
                print("ARKitSession.run() failed: \(error)")
                return
            }
            guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                print("Failed to get an async sequence for the first format.")
                return
            }
            print("cameraFrameUpdates:: \(cameraFrameUpdates)")
            for await cameraFrame in cameraFrameUpdates {
                print("cameraFrame:: \(cameraFrame)")
                guard let leftSample = cameraFrame.sample(for: .left) else {
                    print("Nil camera frame left sample")
                    continue
                }
                self.pixelBuffer = leftSample.pixelBuffer
                self.finalImage = self.setImage()
            }
        } else {
            // Fallback on earlier versions
        }
    }
}
```
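The post's setImage() helper is not shown; a minimal sketch of what such a conversion might look like (the function name and signature are assumptions):

```swift
import CoreImage
import SwiftUI
import UIKit

// Convert the latest CVPixelBuffer into a SwiftUI Image for display.
func makeImage(from pixelBuffer: CVPixelBuffer, context: CIContext = CIContext()) -> Image? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return Image(uiImage: UIImage(cgImage: cgImage))
}
```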
Replies: 3 · Boosts: 0 · Views: 421 · Activity: Sep ’24
Vision framework does not work with visionOS 2.0
I'm trying the Vision framework on Vision Pro, but it does not work on visionOS 2.0 only. When I perform requests, they fail and the errors below are caught. The same code works with visionOS 1.2 and with iOS 18.0 beta. I also tried the new beta API (e.g. GenerateForegroundInstanceMaskRequest), but it fails with the same error. Do you have any idea? Is some permission required to use the Vision framework with visionOS 2.0?

Here is my test list with visionOS 2.0 beta 4:
- GenerateForegroundInstanceMaskRequest (does not work, Error 1)
- VNGenerateForegroundInstanceMaskRequest (does not work, Error 1)
- VNRecognizeTextRequest (does not work, Error 2)

With visionOS 1.2:
- VNRecognizeTextRequest (works)

With iOS 18 beta:
- GenerateForegroundInstanceMaskRequest (works)

My development environments:
- Env 1: Vision Pro on visionOS 2.0 beta 4; Xcode 16.0 beta 4 / 16.0 beta 2; macOS 14.5 (23F79)
- Env 2: Vision Pro on visionOS 1.2; Xcode 15.4; macOS 14.5 (23F79)

Error 1:

```
Error Domain=com.apple.Vision Code=9 "Could not build inference plan - ANECF error: failed to load ANE model file:///System/Library/Frameworks/Vision.framework/subject_lifting_gen1_rev5_gv8dsz6vxu_multihead_int8.espresso.net Error= (DESIGN)" UserInfo={NSLocalizedDescription=Could not build inference plan - ANECF error: failed to load ANE model file:///System/Library/Frameworks/Vision.framework/subject_lifting_gen1_rev5_gv8dsz6vxu_multihead_int8.espresso.net Error= (DESIGN)}
```

Error 2:

```
Error Domain=com.apple.Vision Code=11 "VNRecognizeTextRequest produced an internal error" UserInfo={NSLocalizedDescription=VNRecognizeTextRequest produced an internal error, NSUnderlyingError=0x3001f6850 {Error Domain=CRImageReaderErrorDomain Code=-5 "Unknown error" UserInfo={NSLocalizedDescription=Unknown error}}}
```
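For reference, a minimal repro of the text-recognition path described above (a sketch; the image source is an assumption):

```swift
import Vision

// Minimal repro: run text recognition on a CGImage and return the top
// candidate string for each recognized text region.
func recognizeText(in cgImage: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    let observations = request.results ?? []
    return observations.compactMap { $0.topCandidates(1).first?.string }
}
```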
Replies: 8 · Boosts: 0 · Views: 715 · Activity: Sep ’24
VisionKit crashes on iOS 16.4.
The app crashes on iOS 16.4 whenever the ImageAnalysisInteraction API from VisionKit is present. It crashes before it even starts. Here is the output:

```
dyld[3240]: Symbol not found: _$s9VisionKit24ImageAnalysisInteractionC7subject2atAC7SubjectVSgSo7CGPointV_tYaFTu
Referenced from: <BAD7A699-FB4E-3D0E-8CD4-45CC9FC3D5E5> /Users/sereza/Library/Developer/CoreSimulator/Devices/B64EAF39-0DD9-49EC-A3F7-69675C94B8BE/data/Containers/Bundle/Application/F4E30E86-ED4D-4748-AB99-434208D55483/VisionKitChecker.app/VisionKitChecker
Expected in: <F05E3A17-D74A-3EE2-BC8D-DDCC23E48707> /Library/Developer/CoreSimulator/Volumes/iOS_20E247/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 16.4.simruntime/Contents/Resources/RuntimeRoot/System/Library/Frameworks/VisionKit.framework/VisionKit
```

Here is enough code to produce this crash. Please note that this code never gets called; it is enough that it exists in the project:

```swift
import VisionKit

@MainActor
final class LiftHelper: ObservableObject {
    func doSomething() async throws {
        let interaction = ImageAnalysisInteraction()
        let _ = try await interaction.image(for: [])
    }
}
```
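A common mitigation sketch, assuming the missing symbol belongs to iOS 17-era VisionKit members: annotate the whole type with availability so references to those members are weak-linked and never resolved on an iOS 16.4 launch.

```swift
import SwiftUI   // for ObservableObject
import VisionKit

// Sketch: gating the type behind iOS 17 keeps the dynamic linker from
// requiring the newer ImageAnalysisInteraction symbols at launch on 16.4.
// (Assumption: the members used here are iOS 17 API.)
@available(iOS 17.0, *)
@MainActor
final class LiftHelper: ObservableObject {
    func doSomething() async throws {
        let interaction = ImageAnalysisInteraction()
        let _ = try await interaction.image(for: [])
    }
}
```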
Replies: 3 · Boosts: 1 · Views: 443 · Activity: Jul ’24
Vision Pro OS file location
I would like to know the global path of the Vision Pro file system. For instance, if I put a file called example.pdf inside "On My Apple Vision Pro", what would be the global path for that file? "On My Apple Vision Pro/user_name/example.pdf", or "/example.pdf", or "/username/example.pdf", and so on. I tried to search for this but couldn't find any official source about it. Thanks in advance!
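One way to see a real absolute path at runtime (a sketch; this prints the app's own sandboxed Documents directory, which is not necessarily the same location the Files app shows as "On My Apple Vision Pro"):

```swift
import Foundation

// A sketch: print the app's sandboxed Documents directory, which is where
// a file saved by the app actually lives on disk.
func printDocumentsPath() {
    let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    print(documents.path)
    // Prints something like /var/mobile/Containers/Data/Application/<UUID>/Documents
}
```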
Replies: 1 · Boosts: 0 · Views: 609 · Activity: Jul ’24
Can you match a new photo with existing images?
I'm looking for a solution to take a picture, or point the camera at a piece of clothing, and match that image against images the user has stored in my app. I'm storing the data in a Core Data database as a Binary Data object. Since the user also takes the pictures they store in the database, I think I cannot use pre-trained Core ML models. I would like the matching to be done on device, if possible, instead of going to an external service: such a service would probably describe the item based on what the AI sees, but then I could not match it against the stored images in the app. Does anyone know if this is possible with frameworks such as Vision or VisionKit?
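One on-device approach worth noting (a sketch, not confirmed as what the poster needs): Vision's image feature prints give a distance metric between two images, which can be used to rank the stored photos against a new capture.

```swift
import Vision

// Compute a Vision feature print for an image.
func featurePrint(for cgImage: CGImage) throws -> VNFeaturePrintObservation? {
    let request = VNGenerateImageFeaturePrintRequest()
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    return request.results?.first
}

// Distance between a new photo and a stored one; smaller means more similar.
func distance(_ a: VNFeaturePrintObservation, _ b: VNFeaturePrintObservation) throws -> Float {
    var d: Float = 0
    try a.computeDistance(&d, to: b)
    return d
}
```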
Replies: 2 · Boosts: 0 · Views: 752 · Activity: Jul ’24
Capture Video from my own app using enterprise APIs in visionOS
Hello, I want to capture video from Vision Pro in a visionOS app. I am referring to this Apple video and its code: https://developer.apple.com/videos/play/wwdc2024/10139/

The steps are as follows:

1. import ARKit
2. Set com.apple.developer.arkit.main-camera-access.allow = true in Info.plist
3. Run the code below:

```swift
import ARKit

func loadCameraFeed() async {
    // Main camera feed access example.
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    let cameraFrameProvider = CameraFrameProvider()
    var arKitSession = ARKitSession()
    var pixelBuffer: CVPixelBuffer?

    await arKitSession.queryAuthorization(for: [.cameraAccess])
    do {
        try await arKitSession.run([cameraFrameProvider])
    } catch {
        return
    }
    guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
        return
    }
    print(cameraFrameUpdates)
    for await cameraFrame in cameraFrameUpdates {
        print(cameraFrame)
        guard let mainCameraSample = cameraFrame.sample(for: .left) else {
            continue
        }
        pixelBuffer = mainCameraSample.pixelBuffer
    }
}
```

I want to convert pixelBuffer into a video stream and show it in a frame, as on iOS. Please guide me on how to achieve the next step; I am blank after this code.
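For the "next step", a sketch of one way to persist the incoming frames (all names here are illustrative, and the fixed 30 fps timeline and H.264 settings are assumptions): feed each CVPixelBuffer to an AVAssetWriter.

```swift
import AVFoundation

// A minimal sketch: write incoming pixel buffers to an .mov file.
final class FrameRecorder {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor
    private var frameIndex: Int64 = 0

    init(outputURL: URL, width: Int, height: Int) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ])
        input.expectsMediaDataInRealTime = true
        adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
        writer.add(input)
        guard writer.startWriting() else {
            throw writer.error ?? CocoaError(.fileWriteUnknown)
        }
        writer.startSession(atSourceTime: .zero)
    }

    // Append one camera frame, assuming a fixed 30 fps timeline.
    func append(_ pixelBuffer: CVPixelBuffer) {
        guard input.isReadyForMoreMediaData else { return }
        let time = CMTime(value: frameIndex, timescale: 30)
        if adaptor.append(pixelBuffer, withPresentationTime: time) {
            frameIndex += 1
        }
    }

    func finish() async {
        input.markAsFinished()
        await writer.finishWriting()
    }
}
```

Usage would be to call append(_:) inside the `for await cameraFrame` loop with each left sample's pixelBuffer, then finish() when capture ends.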
Replies: 1 · Boosts: 0 · Views: 864 · Activity: Jun ’24