VisionKit


Scan documents with the camera on iPhone and iPad devices using VisionKit.

Posts under VisionKit tag

44 Posts

Vision Pro - let's join forces to improve the visionOS platform
Hi guys, if you've started using Vision Pro, I'm sure you've already found some limitations. Let's join forces and make feature requests. When creating Feedback, a request from one person may not get any attention from Apple, but if more of us make the same request, we might just push those ideas through. Feel free to add your ideas, and don't forget to file Feedback:

App windows can only be moved forward to a distance of about 20 ft/6 m. I'm pretty sure some users would like to push a window as far as a few miles away and make it large enough to still be legible. This would be very interesting, especially when using Environments and the 360-degree view. I really want to put some apps up in the sky above the mountains and around me, even those iOS apps just made compatible with Vision Pro.

When capturing the screen, I always get the message "Video capture not possible due to insufficient lighting". Why? I have an Environment loaded and extended 360 degrees with some apps open, so there should be no need for external lighting (at least I think it's not needed). I just want to capture what I see. Imagine creating tutorials, recording lessons for various subjects, etc. Actual Vision Pro users might prefer loading their own environments and setting up apps in the spatial domain, but for those who don't have the device yet, or when creating videos to be watched on antique 2D computer screens, it may be useful to create 2D videos this way.

3D video recording is not very good, kind of shaky; not when the Vision Pro is static, but when walking and especially when turning the head left/right/up/down (even relatively slowly). I think the hardware should be able to capture and create nice, smooth video. It's possible that Apple just designed a simple Camera app and wants to give developers a chance to create a better one, but it would still be nice to have something better out of the box.

I would like to be able to walk through Environments. I understand the safety purpose of the see-through effect, so users don't hit any obstacles, but perhaps obstacles could be detected: when the user gets within 6 ft/2 m of an obstacle, the system could first present a warning (there is already "You are close to an object") and then make the surroundings visible. But if there are no obstacles (the user could be located in a large space and place a tape or a thread around the safe area), I should be able to walk around and take a look inside that crater on the Moon.

We need Environments, Environments, Environments, and yet more of them. I was hoping for hundreds, so we could even pick some of them and use them in our apps, like games where you want to set up a specific environment.

Well, that's just a beginning and I could go on and on, but tell me what you guys think. Regards and enjoy the new virtual adventure! Robert
Replies: 5 · Boosts: 0 · Views: 884 · Mar ’24
Recognize Objects (Humans) with Apple Vision Pro Simulator
As the title already suggests: is it possible with the current Apple Vision Pro simulator to recognize objects/humans, like it is currently possible on the iPhone? I am not even sure whether we have an API for accessing the cameras of the Vision Pro. My goal is to recognize, for example, a human and attach a 3D object to it, for example a hat. Can this be done?
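For reference, here is a minimal sketch of the iPhone-side person detection the post refers to, using the Vision framework's VNDetectHumanRectanglesRequest. This is an illustrative assumption of what "currently possible on the iPhone" means; it does not access Vision Pro cameras, and the detectHumans helper name is made up for the example:

import UIKit
import Vision

// Detects human bounding boxes in a UIImage on iPhone/iPad using the Vision framework.
// Returns normalized rects (origin at bottom-left, values 0...1).
func detectHumans(in image: UIImage, completion: @escaping ([CGRect]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }

    let request = VNDetectHumanRectanglesRequest { request, error in
        let boxes = (request.results as? [VNHumanObservation])?.map { $0.boundingBox } ?? []
        completion(boxes)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try handler.perform([request])
        } catch {
            print("Human detection failed: \(error)")
            completion([])
        }
    }
}

Anchoring a 3D object (the hat) to a detected person would still require a source of camera frames, which is exactly the part the question is about on Vision Pro.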
Replies: 1 · Boosts: 0 · Views: 677 · Mar ’24
When I enter immersive view, the window keeps getting pushed back.
I'm using RealityKit to present an immersive view of 360° pictures. However, I'm seeing a problem where the window disappears when I enter immersive mode and returns when I rotate my head. Interestingly, adding ".glassBackground()" to the back of the window cures the issue, but I'd prefer not to use it as the UI's backdrop. How can I deal with this? Here is a link to a GIF: https://firebasestorage.googleapis.com/v0/b/affirmation-604e2.appspot.com/o/Simulator%20Screen%20Recording%20-%20Apple%20Vision%20Pro%20-%202024-01-30%20at%2011.33.39.gif?alt=media&token=3fab9019-4902-4564-9312-30d49b15ea48
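For context, a minimal sketch of the workaround described above, assuming the ".glassBackground()" mentioned refers to SwiftUI's glassBackgroundEffect modifier on visionOS (the view and button names are illustrative):

import SwiftUI

struct ImmersiveControlsWindow: View {
    var body: some View {
        VStack(spacing: 12) {
            Text("360° Viewer")
            Button("Enter Immersive Space") {
                // Open the immersive space from here.
            }
        }
        .padding()
        // Workaround from the post: giving the window content a glass
        // backdrop reportedly keeps it from disappearing in immersive mode.
        .glassBackgroundEffect()
    }
}

The open question is how to get the same stability without using the glass backdrop in the UI.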
Replies: 0 · Boosts: 0 · Views: 550 · Jan ’24
Adding an attachment view to an entity in Reality Composer Pro
Hi, I looked at the Diorama example and wanted to do the same thing: have a custom PointOfInterestComponent on the anchor object in Reality Composer Pro. In the scene I iterate over the entities that have that component and add the corresponding attachment (which I created when adding that scene) to each of them:

update: { content, attachments in
    viewModel.rootEntity?.scene?.performQuery(Self.runtimeQuery).forEach { entity in
        guard let component = entity.components[PointOfInterestRuntimeComponent.self] else { return }
        guard let attachmentEntity = attachments.entity(for: component.attachmentTag) else { return }
        guard attachmentEntity.parent == nil else { return }
        attachmentEntity.setPosition([0.0, 0.4, 0], relativeTo: entity)
        entity.addChild(attachmentEntity, preservingWorldTransform: true)
    }
}

This doesn't show the attachment entity, but if I do content.addChild(attachmentEntity) instead of entity.addChild, it shows up. What could be wrong?
Replies: 0 · Boosts: 0 · Views: 525 · Jan ’24
Bugs with camera and TabView
Hello everyone, I don't know what to do about my problem. I have a barcode reader in my application, implemented with VisionKit. There are other pages in the bottom bar, handled by a TabView. The problem is that when I switch screens, my camera freezes. Does anyone know how to solve this? Thanks for the reply.
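One commonly suggested pattern, sketched here under the assumption that the barcode reader is VisionKit's DataScannerViewController wrapped for SwiftUI (the BarcodeScannerView and ScannerTab names are illustrative), is to stop scanning when the tab disappears and restart it when the tab reappears:

import SwiftUI
import VisionKit

// Minimal wrapper around DataScannerViewController for barcode scanning.
struct BarcodeScannerView: UIViewControllerRepresentable {
    @Binding var isActive: Bool

    func makeUIViewController(context: Context) -> DataScannerViewController {
        DataScannerViewController(
            recognizedDataTypes: [.barcode()],
            qualityLevel: .balanced,
            isHighlightingEnabled: true
        )
    }

    func updateUIViewController(_ scanner: DataScannerViewController, context: Context) {
        // Restart the camera when this tab becomes active again;
        // stop it when the user switches to another tab.
        if isActive {
            try? scanner.startScanning()
        } else {
            scanner.stopScanning()
        }
    }
}

struct ScannerTab: View {
    @State private var isActive = false

    var body: some View {
        BarcodeScannerView(isActive: $isActive)
            .onAppear { isActive = true }
            .onDisappear { isActive = false }
    }
}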
Replies: 2 · Boosts: 0 · Views: 601 · Jan ’24
Vision Pro behavior on user movement
As per the Apple Human Interface Guidelines for visionOS (https://developer.apple.com/design/human-interface-guidelines/immersive-experiences), "If a person moves more than about a meter, the system automatically makes all displayed content translucent to help them navigate their surroundings." What exactly is meant by this translucent behavior? Will the app content be fully invisible, or displayed with some transparency?
Replies: 0 · Boosts: 0 · Views: 532 · Jan ’24
Vision Pro Dev Kit question
Hi guys, has any individual developer received a Vision Pro dev kit, or is it just aimed at big companies? Basically, I would like to start with one or two of my apps that I've already removed from the store, just to get familiar with the visionOS platform and gain knowledge and skills on a small but real project. After that I would like to use the dev kit on another project. I work on contract for a multinational communication company on a pilot project in a small country, and extending that project to visionOS might be a very interesting introduction of this new platform and could excite users utilizing their services. However, I cannot quite reveal the details to Apple for reasons of confidentiality. After completing that contract (or during it, if I manage), I would like to start working on a great idea I have for Vision Pro (as many of you do).

Is it worth applying for the dev kit as an individual developer? I have read some posts where people were rejected. Is it better to start in the simulator and just wait for the actual hardware to show up in the App Store? I would prefer to just get the device, rather than start working with a device that I may need to return in the middle of an unfinished project. Any info on when pre-orders might be possible? Any idea what Mac specs are recommended for developing for visionOS, especially for 3D scenes? I just got a MacBook Pro M3 Max with 96 GB RAM, and I'm wondering if I should have maxed out the config. Anybody using that config with a Vision Pro dev kit? Thanks.
Replies: 0 · Boosts: 0 · Views: 951 · Dec ’23
Delegate methods of ImageAnalysisInteractionDelegate don't fire
I have a Live Text implementation in the following LiveTextImageView. However, after the view loads and the analyze code runs, none of the delegate methods fire when I interact with the live view. Selecting text does not fire the textSelectionDidChange method, nor does highlightSelectedItemsDidChange fire when the Live Text button in the bottom right is pressed. I tried a few different implementations, including an approach where the delegate was defined on a separate class. I am running this on an iPhone 12 Pro I recently updated to 17.0.3. My goal is to be able to provide additional options to the user beyond the default Live Text overlay options, after identifying when they have finished selecting text.

//
// LiveTextImageView.swift
//
import UIKit
import SwiftUI
import VisionKit

class ImageAnalyzerWrapper {
    static let shared = ImageAnalyzer()
    private init() { }
}

struct LiveTextImageViewRepresentable: UIViewRepresentable {
    var image: UIImage

    func makeUIView(context: Context) -> LiveTextImageView {
        return LiveTextImageView(image: image)
    }

    func updateUIView(_ uiView: LiveTextImageView, context: Context) { }
}

class LiveTextImageView: UIImageView, ImageAnalysisInteractionDelegate, UIGestureRecognizerDelegate {
    var capturedSelectedText: String?
    let analyzer = ImageAnalyzerWrapper.shared
    let interaction = ImageAnalysisInteraction()

    init(image: UIImage) {
        super.init(frame: .zero)
        self.image = image
        let photoWrapper = PhotoWrapper(rawPhoto: image)
        let resizedPhoto = photoWrapper.viewportWidthCroppedPhoto(padding: 40)
        self.image = resizedPhoto
        self.contentMode = .scaleAspectFit
        self.addInteraction(interaction)
        interaction.preferredInteractionTypes = []
        interaction.analysis = nil
        analyzeImage()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func analyzeImage() {
        if let image = self.image {
            Task {
                let configuration = ImageAnalyzer.Configuration([.text])
                do {
                    let analysis = try await analyzer.analyze(image, configuration: configuration)
                    self.addInteraction(interaction)
                    interaction.delegate = self
                    interaction.analysis = analysis
                    interaction.preferredInteractionTypes = .textSelection
                } catch {
                    print("Error in live image handling")
                }
            }
        }
    }

    func interaction(
        _ interaction: ImageAnalysisInteraction,
        highlightSelectedItemsDidChange highlightSelectedItems: Bool) async {
        print("Highlighted items changed")
    }

    func interaction(
        _ interaction: ImageAnalysisInteraction,
        shouldBeginAt point: CGPoint,
        for interactionType: ImageAnalysisInteraction.InteractionTypes) async -> Bool {
        return interaction.hasInteractiveItem(at: point) || interaction.hasActiveTextSelection
    }

    func textSelectionDidChange(_ interaction: ImageAnalysisInteraction) async {
        print("Changed!")
        if #available(iOS 17.0, *) {
            capturedSelectedText = interaction.text
            print(capturedSelectedText ?? "")
        }
    }
}
Replies: 0 · Boosts: 0 · Views: 557 · Oct ’23
Runtime crash on iOS 16 when an iOS 17-only API is referenced
Hi everyone, I'm having a strange crash on app launch on iOS 16 when I have a reference to an iOS 17-only API in my code. Even if I wrap the code in #available, I still get the crash on launch, and the code isn't even called yet; just the existence of it causes the crash. Pretty strange, I thought? The framework is VisionKit, and the code that causes the crash is:

if #available(iOS 17, *) {
    // .imageSubject is iOS 17 only - but this causes
    // a crash on launch in iOS 16 even with the #available check
    interaction.preferredInteractionTypes = .imageSubject
}

The crash is:

Referenced from: <91ED5216-D66C-3649-91DA-B31C0B55DDA1> /private/var/containers/Bundle/Application/78FD9C93-5657-4FF5-85E7-A44B60717870/XXXXXX.app/XXXXXX
Expected in: <AF01C435-3C37-3C7C-84D9-9B5EA3A59F5C> /System/Library/Frameworks/VisionKit.framework/VisionKit

Any thoughts, anyone? I know .imageSubject is iOS 17 only, but the #available check should catch it, no? And why does it crash immediately on launch, when that code is not even called? Odd!
Replies: 1 · Boosts: 1 · Views: 590 · Oct ’23
iOS app crashed on iOS 16
Hi team, we have an iOS app. Since July 15, 2022, our users have been hitting an app crash caused by an invalid memory fetch; that is when Apple officially released the iOS 16 beta. After September 12, the crash count started to increase drastically; that is when Apple released iOS 16 officially. The crash backtrace is as follows:

Thread 14 Crashed:
0   libsystem_platform.dylib   0x00000001f8810930 _platform_memmove + 96
1   CoreGraphics               0x00000001adb64104 CGDataProviderCreateWithCopyOfData + 20
2   CoreGraphics               0x00000001adb4cdb4 CGBitmapContextCreateImage + 172
3   VisionKitCore              0x00000001ed813f10 -[VKCRemoveBackgroundResult _createCGImageFromBGRAPixelBuffer:cropRect:] + 348
4   VisionKitCore              0x00000001ed813cc0 -[VKCRemoveBackgroundResult createCGImage] + 156
5   VisionKitCore              0x00000001ed8ab6f8 __vk_cgImageRemoveBackgroundWithDownsizing_block_invoke + 64
6   VisionKitCore              0x00000001ed881474 __63-[VKCRemoveBackgroundRequestHandler performRequest:completion:]_block_invoke.5 + 436
7   MediaAnalysisServices      0x00000001eec58968 __92-[MADService performRequests:onPixelBuffer:withOrientation:andIdentifier:completionHandler:]_block_invoke.38 + 400
8   CoreFoundation             0x00000001abff0a14 __invoking___ + 148
9   CoreFoundation             0x00000001abf9cf2c -[NSInvocation invoke] + 428
10  Foundation                 0x00000001a6464d38 __NSXPCCONNECTION_IS_CALLING_OUT_TO_REPLY_BLOCK__ + 16
11  Foundation                 0x00000001a64362fc -[NSXPCConnection _decodeAndInvokeReplyBlockWithEvent:sequence:replyInfo:] + 520
12  Foundation                 0x00000001a6a10f44 __88-[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:]_block_invoke_5 + 188
13  libxpc.dylib               0x00000001f89053e4 _xpc_connection_reply_callout + 124
14  libxpc.dylib               0x00000001f88f8580 _xpc_connection_call_reply_async + 88
15  libdispatch.dylib          0x00000001b340205c _dispatch_client_callout3 + 20
16  libdispatch.dylib          0x00000001b341ff58 _dispatch_mach_msg_async_reply_invoke + 344
17  libdispatch.dylib          0x00000001b340956c _dispatch_lane_serial_drain + 376
18  libdispatch.dylib          0x00000001b340a214 _dispatch_lane_invoke + 436
19  libdispatch.dylib          0x00000001b3414e10 _dispatch_workloop_worker_thread + 652
20  libsystem_pthread.dylib    0x00000001f88a4df8 _pthread_wqthread + 288
21  libsystem_pthread.dylib    0x00000001f88a4b98 start_wqthread + 8

Last but not least: the users who hit this crash are on iOS 16+, so we think it is related to the iOS 16 SDK. We would appreciate any clues on how to fix this kind of crash.
Replies: 14 · Boosts: 2 · Views: 5.1k · Oct ’23
Apple Vision Pro - Showing Error
var accessibilityComponent = AccessibilityComponent()
accessibilityComponent.isAccessibilityElement = true
accessibilityComponent.traits = [.button, .playsSound]
accessibilityComponent.label = "Cloud"
accessibilityComponent.value = "Grumpy"
cloud.components[AccessibilityComponent.self] = accessibilityComponent

// ...

var isHappy: Bool {
    didSet {
        cloudEntities[id].accessibilityValue = isHappy ? "Happy" : "Grumpy"
    }
}
Replies: 0 · Boosts: 0 · Views: 609 · Sep ’23