VisionKit


Scan documents with the camera on iPhone and iPad devices using VisionKit.

Posts under VisionKit tag

44 Posts

Post · Replies · Boosts · Views · Activity

Recognize Objects (Humans) with Apple Vision Pro Simulator
As the title suggests: is it possible with the current Apple Vision Pro simulator to recognize objects or humans, as is currently possible on iPhone? I am not even sure whether there is an API for accessing the Vision Pro's cameras. My goal is to recognize, for example, a human and attach a 3D object to them, for example a hat. Can this be done?
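For reference, here is a minimal sketch of the iPhone-side detection this question compares against, using Vision's VNDetectHumanRectanglesRequest on a still image. It is not a Vision Pro solution; as far as I know, neither visionOS 1.x nor the simulator exposes camera frames to third-party apps, so the same pipeline cannot currently be reproduced on the device.

import UIKit
import Vision

// Detect humans in a still image with the Vision framework (iPhone/iPad).
// Bounding boxes are normalized (0...1) in the image's coordinate space.
func detectHumans(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectHumanRectanglesRequest { request, error in
        guard let observations = request.results as? [VNHumanObservation] else { return }
        for person in observations {
            print("Human at \(person.boundingBox)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up)
    try? handler.perform([request])
}

Anchoring a 3D object (such as a hat) to a detected person would additionally require live camera access and pose tracking, which, as far as I know, is not currently exposed on Vision Pro.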
1 reply · 0 boosts · 676 views · Mar ’24
When I enter immersive view, the window keeps getting pushed back.
I'm using RealityKit to present an immersive view of 360° pictures. However, I'm seeing a problem where the window disappears when I enter immersive mode and only returns when I rotate my head. Interestingly, applying ".glassBackground()" to the back of the window cures the issue, but I'd prefer not to use it as the UI's backdrop. How can I deal with this? Here is a link to a GIF: https://firebasestorage.googleapis.com/v0/b/affirmation-604e2.appspot.com/o/Simulator%20Screen%20Recording%20-%20Apple%20Vision%20Pro%20-%202024-01-30%20at%2011.33.39.gif?alt=media&token=3fab9019-4902-4564-9312-30d49b15ea48
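For context, here is a minimal sketch of the scene setup involved, with placeholder names (PanoramaApp, ControlsView, ImmersiveView); the backdrop the poster mentions presumably corresponds to SwiftUI's glassBackgroundEffect() modifier applied to the 2D window's content.

import SwiftUI
import RealityKit

@main
struct PanoramaApp: App {
    var body: some Scene {
        // The ordinary 2D window that reportedly gets pushed back / disappears.
        WindowGroup(id: "controls") {
            ControlsView()
        }

        // The immersive space that shows the 360° picture via RealityKit.
        ImmersiveSpace(id: "panorama") {
            ImmersiveView()
        }
    }
}

struct ControlsView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter panorama") {
            Task { _ = await openImmersiveSpace(id: "panorama") }
        }
        .padding()
        .glassBackgroundEffect() // the workaround the poster would rather avoid
    }
}

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Load the 360° sphere / skybox entity here.
        }
    }
}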
0 replies · 0 boosts · 549 views · Jan ’24
Adding an attachment view to an entity in Reality Composer Pro
Hi, I looked at the Diorama example and wanted to do the same thing: have a custom PointOfInterestComponent on the anchor object in Reality Composer Pro. In the scene I iterate to find entities with that component and hook up the attachment (which I created when adding that scene) in the update closure:

update: { content, attachments in
    viewModel.rootEntity?.scene?.performQuery(Self.runtimeQuery).forEach { entity in
        guard let component = entity.components[PointOfInterestRuntimeComponent.self] else { return }
        guard let attachmentEntity = attachments.entity(for: component.attachmentTag) else { return }
        guard attachmentEntity.parent == nil else { return }
        attachmentEntity.setPosition([0.0, 0.4, 0], relativeTo: entity)
        entity.addChild(attachmentEntity, preservingWorldTransform: true)
    }
}

This doesn't show the attachment entity, but if I call content.addChild(attachmentEntity) instead of entity.addChild, it shows up. What could be wrong?
0 replies · 0 boosts · 523 views · Jan ’24
Bugs with camera and TabView
Hello everyone, I don't know what to do about my problem. I have a barcode reader in my application, implemented with VisionKit. The other pages in the bottom bar are handled by a TabView. The problem is that when I switch screens, my camera freezes. Does anyone know how to solve this? Thanks for the reply.
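A hedged sketch of one workaround, not a confirmed fix: wrap DataScannerViewController in a UIViewControllerRepresentable and call startScanning() again whenever SwiftUI refreshes the scanner, since the capture session can stop when a TabView swaps the hosting view out and back in. ScannerView and ScannerTab are placeholder names.

import SwiftUI
import VisionKit

struct ScannerView: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> DataScannerViewController {
        // Barcode-only scanner; add NSCameraUsageDescription to Info.plist and
        // check DataScannerViewController.isSupported / isAvailable in real code.
        DataScannerViewController(
            recognizedDataTypes: [.barcode()],
            qualityLevel: .balanced
        )
    }

    func updateUIViewController(_ scanner: DataScannerViewController, context: Context) {
        // Restart scanning after a tab switch; startScanning() throws if the
        // camera is unavailable, so failures are simply ignored in this sketch.
        if !scanner.isScanning {
            try? scanner.startScanning()
        }
    }
}

struct ScannerTab: View {
    var body: some View {
        TabView {
            ScannerView()
                .tabItem { Label("Scan", systemImage: "barcode.viewfinder") }
            Text("Settings")
                .tabItem { Label("Settings", systemImage: "gear") }
        }
    }
}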
2 replies · 0 boosts · 600 views · Jan ’24
Vision Pro behavior on user movement
As per the Apple Human Interface Guidelines for visionOS (https://developer.apple.com/design/human-interface-guidelines/immersive-experiences), if a person moves more than about a meter, the system automatically makes all displayed content translucent to help them navigate their surroundings. What is meant here by "translucent"? Will the app content be fully invisible, or displayed with some transparency?
0 replies · 0 boosts · 531 views · Jan ’24
VisionKit can lift a subject, but the bounding rectangle always returns x, y, width, height as 0, 0, 0, 0
In our app we needed to use the VisionKit framework to lift the subject from an image and crop it. Here is the piece of code:

if #available(iOS 17.0, *) {
    let analyzer = ImageAnalyzer()
    let analysis = try? await analyzer.analyze(image, configuration: self.visionKitConfiguration)

    let interaction = ImageAnalysisInteraction()
    interaction.analysis = analysis
    interaction.preferredInteractionTypes = [.automatic]

    guard let subject = await interaction.subjects.first else { return image }
    let s = await interaction.subjects
    print(s.first?.bounds)

    guard let cropped = try? await subject.image else { return image }
    return cropped
}

But s.first?.bounds always returns a CGRect with all 0 values. Is there any other way to get the position of the cropped subject? I need the position in the image from which the subject was cropped. Can anyone help?
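One possible workaround, sketched below under the assumption that image-pixel coordinates are acceptable: Vision's VNGenerateForegroundInstanceMaskRequest (iOS 17) produces a subject mask for the same lift operation, and a bounding rectangle can be computed from that mask. The helper name subjectBoundingBox and the one-component float mask format are assumptions of this sketch.

import UIKit
import Vision
import CoreVideo

@available(iOS 17.0, *)
func subjectBoundingBox(in image: UIImage) throws -> CGRect? {
    guard let cgImage = image.cgImage else { return nil }

    // Ask Vision for the foreground-subject mask of the image.
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try handler.perform([request])
    guard let observation = request.results?.first else { return nil }

    // Mask scaled to the input image's resolution, covering all subject instances.
    let mask = try observation.generateScaledMaskForImage(
        forInstances: observation.allInstances,
        from: handler
    )

    // Scan the mask (assumed one 32-bit float component per pixel) for covered
    // pixels and accumulate the extremes into a bounding rectangle.
    CVPixelBufferLockBaseAddress(mask, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(mask, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(mask) else { return nil }

    let width = CVPixelBufferGetWidth(mask)
    let height = CVPixelBufferGetHeight(mask)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(mask)
    var minX = width, minY = height, maxX = -1, maxY = -1

    for y in 0..<height {
        let row = (base + y * bytesPerRow).assumingMemoryBound(to: Float.self)
        for x in 0..<width where row[x] > 0 {
            minX = min(minX, x); maxX = max(maxX, x)
            minY = min(minY, y); maxY = max(maxY, y)
        }
    }
    guard maxX >= minX, maxY >= minY else { return nil }
    return CGRect(x: minX, y: minY, width: maxX - minX + 1, height: maxY - minY + 1)
}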
1 reply · 0 boosts · 614 views · May ’24
Vision Pro Dev Kit question
Hi guys, has any individual developer received a Vision Pro dev kit, or is it aimed only at big companies? Basically I would like to start with one or two of my apps that I already removed from the store, just to get familiar with the visionOS platform and gain knowledge and skills on a small but real project. After that I would like to use the dev kit on another project. I work on contract for a multinational communication company on a pilot project in a small country, and extending that project to visionOS might be a very interesting introduction of this new platform and could excite users utilizing their services. However, I cannot quite reveal the details to Apple for reasons of confidentiality. After completing that contract (or during it, if I manage) I would like to start working on a great idea I have for Vision Pro (as many of you do).

Is it worth applying for the dev kit as an individual dev? I have read posts saying that some people were rejected. Is it better to start in the simulator and just wait for the actual hardware to become available in stores? I would prefer to just get the device rather than work with a device that I may need to return in the middle of an unfinished project. Any info on when pre-orders might be possible?

Any idea what the Mac specs should be for developing for visionOS, especially for 3D scenes? I just got a MacBook Pro M3 Max with 96 GB RAM, and I'm wondering whether I should have maxed out the config. Is anybody using that config with the Vision Pro dev kit? Thanks.
0 replies · 0 boosts · 951 views · Dec ’23
Delegate methods of ImageAnalysisInteractionDelegate don't fire
I have a Live Text implementation in the following LiveTextImageView. However, after the view loads and the analyze code runs, none of the delegate methods fire when I interact with the live view. Selecting text does not fire textSelectionDidChange, nor does highlightSelectedItemsDidChange fire when the Live Text button in the bottom right is pressed. I tried a few different implementations, including an approach where the delegate was defined on a separate class. I am running this on an iPhone 12 Pro that I recently updated to 17.0.3. My goal is to provide additional options to the user beyond the default Live Text overlay options, after identifying when they have finished selecting text.

//
//  LiveTextImageView.swift
//

import UIKit
import SwiftUI
import VisionKit

class ImageAnalyzerWrapper {
    static let shared = ImageAnalyzer()
    private init() { }
}

struct LiveTextImageViewRepresentable: UIViewRepresentable {
    var image: UIImage

    func makeUIView(context: Context) -> LiveTextImageView {
        return LiveTextImageView(image: image)
    }

    func updateUIView(_ uiView: LiveTextImageView, context: Context) { }
}

class LiveTextImageView: UIImageView, ImageAnalysisInteractionDelegate, UIGestureRecognizerDelegate {
    var capturedSelectedText: String?

    let analyzer = ImageAnalyzerWrapper.shared
    let interaction = ImageAnalysisInteraction()

    init(image: UIImage) {
        super.init(frame: .zero)
        self.image = image

        let photoWrapper = PhotoWrapper(rawPhoto: image)
        let resizedPhoto = photoWrapper.viewportWidthCroppedPhoto(padding: 40)
        self.image = resizedPhoto
        self.contentMode = .scaleAspectFit

        self.addInteraction(interaction)
        interaction.preferredInteractionTypes = []
        interaction.analysis = nil
        analyzeImage()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func analyzeImage() {
        if let image = self.image {
            Task {
                let configuration = ImageAnalyzer.Configuration([.text])
                do {
                    let analysis = try await analyzer.analyze(image, configuration: configuration)
                    self.addInteraction(interaction)
                    interaction.delegate = self
                    interaction.analysis = analysis
                    interaction.preferredInteractionTypes = .textSelection
                } catch {
                    print("Error in live image handling")
                }
            }
        }
    }

    func interaction(
        _ interaction: ImageAnalysisInteraction,
        highlightSelectedItemsDidChange highlightSelectedItems: Bool) async {
        print("Highlighted items changed")
    }

    func interaction(_ interaction: ImageAnalysisInteraction,
                     shouldBeginAt point: CGPoint,
                     for interactionType: ImageAnalysisInteraction.InteractionTypes) async -> Bool {
        return interaction.hasInteractiveItem(at: point) || interaction.hasActiveTextSelection
    }

    func textSelectionDidChange(_ interaction: ImageAnalysisInteraction) async {
        print("Changed!")
        if #available(iOS 17.0, *) {
            capturedSelectedText = interaction.text
            print(capturedSelectedText ?? "")
        }
    }
}
0 replies · 0 boosts · 557 views · Oct ’23
Runtime crash on iOS 16 when an iOS 17 API is referenced
Hi everyone, I'm getting a strange crash on app launch on iOS 16 when I have a reference to an iOS 17-only API in my code. Even if I wrap the code in #available, I still get the crash on launch, and the code isn't even called yet; just its existence causes the crash. Pretty strange, I thought? The framework is VisionKit, and the code that causes the crash is:

if #available(iOS 17, *) {
    // .imageSubject is iOS 17 only - but this causes
    // a crash on launch in iOS 16 even with the #available check
    interaction.preferredInteractionTypes = .imageSubject
}

The crash is:

Referenced from: <91ED5216-D66C-3649-91DA-B31C0B55DDA1> /private/var/containers/Bundle/Application/78FD9C93-5657-4FF5-85E7-A44B60717870/XXXXXX.app/XXXXXX
Expected in: <AF01C435-3C37-3C7C-84D9-9B5EA3A59F5C> /System/Library/Frameworks/VisionKit.framework/VisionKit

Any thoughts, anyone? I know .imageSubject is iOS 17 only, but shouldn't the #available check catch it? And why does it crash immediately on launch, when that code is not even called? Odd!
1 reply · 1 boost · 590 views · Oct ’23
Apple Vision Pro - Showing Error
var accessibilityComponent = AccessibilityComponent()
accessibilityComponent.isAccessibilityElement = true
accessibilityComponent.traits = [.button, .playsSound]
accessibilityComponent.label = "Cloud"
accessibilityComponent.value = "Grumpy"
cloud.components[AccessibilityComponent.self] = accessibilityComponent

// ...

var isHappy: Bool {
    didSet {
        cloudEntities[id].accessibilityValue = isHappy ? "Happy" : "Grumpy"
    }
}
0 replies · 0 boosts · 608 views · Sep ’23
Turn a physical surface into a touchscreen in visionOS
In visionOS, is it possible to detect when a user is touching a physical surface in the real world and also to project 2D graphics onto that surface? Imagine a windowless 2D app that is projected onto a surface, essentially turning a physical wall, table, etc. into a giant touchscreen. So kind of like this: https://appleinsider.com/articles/23/06/23/vision-pro-will-turn-any-surface-into-a-display-with-touch-control But I want every surface in the room to be touchable and able to display 2D graphics on its face, not floating in space. So essentially turning every physical surface in the room into a UIView. Thanks!
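For what it's worth, here is a rough sketch of the closest building block I'm aware of: ARKit plane detection inside an ImmersiveSpace, which locates walls and tables so content can be anchored flush to them. As far as I know there is no public API for detecting a finger physically touching a surface, nor for projecting window content onto one, so this only covers the "know where the surfaces are" half. SurfaceTracker is a placeholder name.

import ARKit

@MainActor
final class SurfaceTracker {
    private let session = ARKitSession()
    private let planes = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

    func start() async {
        do {
            // Requires an open immersive space and world-sensing permission at runtime.
            try await session.run([planes])
            for await update in planes.anchorUpdates {
                let anchor = update.anchor
                print("Plane \(anchor.id), classification: \(anchor.classification)")
                // Position RealityKit content using anchor.originFromAnchorTransform here.
            }
        } catch {
            print("Plane detection failed: \(error)")
        }
    }
}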
2 replies · 0 boosts · 726 views · Sep ’23
How to gain full control over Apple Vision Pro's display and render a 2D graph plot on it
How can I achieve full control over Vision Pro's display and effectively render a 2D graph plot on it? I would appreciate guidance on the necessary steps or code snippets. P.S. As per the Apple documentation, "For a more immersive experience, an app can open a dedicated Full Space where only that app's content will appear." This still does not fulfill the "flat, bounded 2D" requirement, as Spaces provide an unbounded 3D immersive view.
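A minimal sketch, assuming a regular bounded window is acceptable: on visionOS, a plain WindowGroup already provides a flat, bounded 2D surface, so a Swift Charts plot can be rendered there without opening a Full Space at all. GraphApp and the sine-wave data are placeholders.

import SwiftUI
import Charts
import Foundation

@main
struct GraphApp: App {
    var body: some Scene {
        // A bounded 2D window; no ImmersiveSpace is needed for a flat plot.
        WindowGroup {
            Chart(Array(stride(from: 0.0, through: 10.0, by: 0.5)), id: \.self) { x in
                LineMark(
                    x: .value("x", x),
                    y: .value("sin(x)", sin(x))
                )
            }
            .padding()
        }
    }
}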
2 replies · 0 boosts · 713 views · Jul ’23