Apply computer vision algorithms to perform a variety of tasks on input images and video using Vision.

Posts under Vision tag

90 Posts
Inquiry about shipping a visionOS app separate from the existing iOS App Store app
Hello. I'm currently selling an iOS app. I tried running the iOS version on visionOS to provide the same service as the existing app, but a huge amount of the code turned out to be incompatible. Can I create a visionOS app with the same name by adding visionOS on the App Store Connect site and uploading a completely separate app project? The iOS app and the visionOS app would share the same iCloud container, in-app purchases, and so on. However, we plan to keep the projects separate to reduce the download size for users and to optimize each app.
0 replies · 0 boosts · 406 views · Feb ’24
Error loading ReferenceImage
Currently, I'm trying to test the ImageTrackingProvider with the Apple Vision Pro. I started with some basic code:

import RealityKit
import ARKit

@MainActor
class ARKitViewModel: ObservableObject {
    private let session = ARKitSession()
    private let imageTracking = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "AR")
    )

    func runSession() async {
        do {
            try await session.run([imageTracking])
        } catch {
            print(error)
        }
    }

    func processUpdates() async {
        for await _ in imageTracking.anchorUpdates {
            print("test")
        }
    }
}

I have only one picture in the AR resource group. I added the physical size, and there are no error messages in the AR group. When I run the application on the Vision Pro, I receive the following error:

ar_image_tracking_provider_t <0x28398f1e0>: Failed to load reference image <ARReferenceImage: 0x28368f120 name="IMG_1640" physicalSize=(1.350, 2.149)> with error: Failed to add reference image.

It finds the image, but there seems to be a problem loading it. I tried both the JPEG and the PNG format. I do not understand why it fails to load the ReferenceImage. I use Xcode Version 15.3 beta 3.
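
A minimal diagnostic sketch, assuming the same "AR" resource group: checking ImageTrackingProvider.isSupported first and logging every anchor update can help narrow down whether the provider rejects the image outright or merely never starts tracking it. (Reference images also need enough visual detail and an accurate physical size, which is worth double-checking.)

import ARKit

@MainActor
func runImageTrackingDiagnostics() async {
    // Image tracking is unavailable in some environments (e.g. the simulator).
    guard ImageTrackingProvider.isSupported else {
        print("Image tracking is not supported here.")
        return
    }
    let provider = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "AR")
    )
    let session = ARKitSession()
    do {
        try await session.run([provider])
        for await update in provider.anchorUpdates {
            // Logs .added / .updated / .removed plus the tracking state per image.
            print(update.event, update.anchor.referenceImage.name ?? "unnamed", update.anchor.isTracked)
        }
    } catch {
        print("Session error:", error)
    }
}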
2 replies · 0 boosts · 727 views · Feb ’24
SwiftData not working in visionOS
I want to make an iCloud backup using SwiftData in visionOS. I need to get SwiftData working first, but I get an error even though I followed the steps below.

I created a model:

import Foundation
import SwiftData

@Model
class NoteModel {
    @Attribute(.unique) var id: UUID
    var date: Date
    var title: String
    var text: String

    init(id: UUID = UUID(), date: Date, title: String, text: String) {
        self.id = id
        self.date = date
        self.title = title
        self.text = text
    }
}

I added a model container:

WindowGroup(content: {
    NoteView()
})
.modelContainer(for: [NoteModel.self])

And I'm doing inserts to test:

import SwiftUI
import SwiftData

struct NoteView: View {
    @Environment(\.modelContext) private var context

    var body: some View {
        Button(action: {
            // Insert a new note.
            let note = NoteModel(date: Date(), title: "New Note", text: "")
            context.insert(note)
        }, label: {
            Image(systemName: "note.text.badge.plus")
                .font(.system(size: 24))
                .frame(width: 30, height: 30)
                .padding(12)
                .background(
                    RoundedRectangle(cornerRadius: 50)
                        .foregroundStyle(.black.opacity(0.2))
                )
        })
        .buttonStyle(.plain)
        .hoverEffectDisabled(true)
    }
}

#Preview {
    NoteView().modelContainer(for: [NoteModel.self])
}
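
Since the end goal is iCloud backup, a minimal sketch of configuring the container for CloudKit explicitly follows, assuming the iCloud/CloudKit capability is enabled for the target; the container identifier is hypothetical. One caveat worth checking: CloudKit-backed SwiftData stores don't allow unique constraints, so the @Attribute(.unique) on id may itself need to be removed.

import SwiftUI
import SwiftData

@main
struct NotesApp: App {
    // Build the container by hand so CloudKit syncing can be requested.
    // "iCloud.com.example.Notes" is a hypothetical container identifier.
    let container: ModelContainer

    init() {
        let config = ModelConfiguration(cloudKitDatabase: .private("iCloud.com.example.Notes"))
        // Crash early in this sketch if the store cannot be created.
        container = try! ModelContainer(for: NoteModel.self, configurations: config)
    }

    var body: some Scene {
        WindowGroup {
            NoteView()
        }
        .modelContainer(container)
    }
}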
0 replies · 0 boosts · 412 views · Feb ’24
How to make a SwiftUI window display the frosted glass effect correctly in a Metal immersive space
I used Metal and CompositorLayer to render a skybox in an immersive space. In this space, the SwiftUI window I created only displays a gray frosted glass background (it seems to ignore the Metal-rendered skybox and samples only the black background). Why is that? Is there any way to display the normal frosted glass background? Thank you very much!
1 reply · 0 boosts · 617 views · Mar ’24
Object Detection using Vision performs differently than in Create ML Preview
Context
I've trained my model for object detection with more than 4,000 images. Under Preview, I'm able to check the prediction for image "A", which detects two labels at 100% confidence, and its bounding boxes look accurate.

The problem itself
However, inside a Swift Playground, when I try to perform object detection using the same model and the same image, I don't get the same results.

What I expected
That after performing the request and processing the array of VNRecognizedObjectObservation, it would show the very same results that appear in the Create ML Preview.

Notes:
I'm importing the model into the playground by drag and drop.
I trained the images using the JPEG format.
The test image is rotated so that it looks vertical using the macOS Finder rotation tool.
While creating the VNImageRequestHandler, I've tried passing a different orientation, with the same result.

Swift Playground code
This is the code I'm using:

import UIKit
import Vision

do {
    let model = try MYMODEL_FROMCREATEML(configuration: MLModelConfiguration())
    let mlModel = model.model
    let coreMLModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: coreMLModel) { request, error in
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        results.forEach { result in
            print(result.labels)
            print(result.boundingBox)
        }
    }

    let image = UIImage(named: "TEST_IMAGE.HEIC")!
    let requestHandler = VNImageRequestHandler(cgImage: image.cgImage!)
    try requestHandler.perform([request])
} catch {
    print(error)
}

Additional Notes & Uncertainties
Not sure if this is relevant, but just in case: I trained the model using pictures I took with my iPhone in the 48 MP HEIC format. All photos were in vertical orientation. With a Python script I overwrote the EXIF orientation to 1 (Normal), in order to be able to annotate the images using the CVAT tool and then convert to the Create ML annotation format.

Assumption #1
I've read that object detection in Create ML is based on the YOLOv3 architecture, whose first layer resizes the image dimensions, meaning that I don't have to worry about using very large images to train my model. Is this correct?

Assumption #2
That also makes me assume the same resizing happens when I make a prediction?
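
One thing that can cause exactly this mismatch: image.cgImage drops the UIImage's orientation metadata, so Vision sees the un-rotated pixels. A hedged sketch of passing the orientation explicitly; the conversion extension mirrors the one Apple ships in its Vision sample code rather than anything built into the SDK.

import UIKit
import Vision
import ImageIO

// Map UIImage.Orientation onto CGImagePropertyOrientation so Vision
// interprets the pixel data the same way UIKit displays it.
extension CGImagePropertyOrientation {
    init(_ uiOrientation: UIImage.Orientation) {
        switch uiOrientation {
        case .up: self = .up
        case .upMirrored: self = .upMirrored
        case .down: self = .down
        case .downMirrored: self = .downMirrored
        case .left: self = .left
        case .leftMirrored: self = .leftMirrored
        case .right: self = .right
        case .rightMirrored: self = .rightMirrored
        @unknown default: self = .up
        }
    }
}

let image = UIImage(named: "TEST_IMAGE.HEIC")!
let requestHandler = VNImageRequestHandler(
    cgImage: image.cgImage!,
    orientation: CGImagePropertyOrientation(image.imageOrientation)
)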
0 replies · 0 boosts · 583 views · Mar ’24
How to show only spatial video using PHPickerFilter in Swift 5.0
I want to get only spatial video when opening the photo library in my app. How can I achieve this? One more thing: if I select a video from the photo library, how can I identify whether the selected video is a spatial video or not?

self.presentPicker(filter: .videos)

/// - Tag: PresentPicker
private func presentPicker(filter: PHPickerFilter?) {
    var configuration = PHPickerConfiguration(photoLibrary: .shared())

    // Set the filter type according to the user's selection.
    configuration.filter = filter
    // Set the mode to avoid transcoding, if possible, if your app supports arbitrary image/video encodings.
    configuration.preferredAssetRepresentationMode = .current
    // Set the selection behavior to respect the user's selection order.
    configuration.selection = .ordered
    // Set the selection limit to enable multiselection.
    configuration.selectionLimit = 1

    let picker = PHPickerViewController(configuration: configuration)
    picker.delegate = self
    present(picker, animated: true)
}

func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    picker.dismiss(animated: true) {
        // do something on dismiss
    }

    guard let provider = results.first?.itemProvider else { return }

    provider.loadFileRepresentation(forTypeIdentifier: "public.movie") { url, error in
        guard error == nil else {
            print(error!)
            return
        }
        // Receiving the local video URL / file path.
        guard let url = url else { return }

        // Create a new file name.
        let fileName = "\(Int(Date().timeIntervalSince1970)).\(url.pathExtension)"
        // Create a new URL in the temporary directory.
        let newUrl = URL(fileURLWithPath: NSTemporaryDirectory() + fileName)
        print(newUrl)
        print("===========")
        // Copy the item to app storage.
        // try? FileManager.default.copyItem(at: url, to: newUrl)
        // self.parent.videoURL = newUrl.absoluteString
    }
}
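
On the second question, a hedged sketch of one way to identify a spatial video once the picker hands back a file URL: spatial videos are packaged as MV-HEVC with stereo multiview video tracks, so checking for that media characteristic should work. Availability of .containsStereoMultiviewVideo on the deployment target is an assumption.

import AVFoundation

// Returns true if any video track carries stereo multiview (MV-HEVC)
// content, which is how spatial videos are packaged.
func isSpatialVideo(at url: URL) async throws -> Bool {
    let asset = AVURLAsset(url: url)
    let stereoTracks = try await asset.loadTracks(
        withMediaCharacteristic: .containsStereoMultiviewVideo
    )
    return !stereoTracks.isEmpty
}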
1 reply · 0 boosts · 528 views · Mar ’24
visionOS upload error: Profile doesn't include the com.apple.application-identifier entitlement
Currently, I'm bringing the iOS version being sold on the App Store to visionOS with the same app ID and identifier, and I'm trying to upload it using the same identifier, developer ID, and iCloud, but the upload fails with an error. I didn't know what the problem was, so I submitted an update of the iOS version early, and that went through review without any problems. Uploading the standalone visionOS version hits a provisioning profile problem, so I would appreciate it if you could tell me the solution. In addition, a lot of the code used in the iOS version is not compatible with visionOS, so we have created a new project for visionOS.
0 replies · 0 boosts · 398 views · Mar ’24
One app, multiple platforms: the concerns of a novice developer
Hello. Currently, only the iOS version is on sale on the App Store. The application offers an iCloud-linked, auto-renewable subscription. I want to sell on App Store Connect under the same identifier and App ID at the same time. I simply added visionOS to the existing app project to ship the visionOS version early, but the existing UI-related code and the location-related code are not compatible. Using the same identifier and the same name, I duplicated and optimized only what could be implemented, and it runs without any problems on the actual device. However, when I added the visionOS platform on App Store Connect and tried to upload the visionOS app I had created through an archive, there was an error with the identifier and provisioning, so the upload was blocked.

What I found while trying to solve the problem:

App Groups - I learned about this feature, but judged that it is meant for integrated services across separate apps, so it is not suitable for me.

Adding an app to the existing project via a target, and manually adjusting the platform in Xcode -> Build Phases -> Compile Sources -> then archiving and uploading successfully? (I haven't been able to implement this step yet.)

I've explained the current situation; please give me some advice on how to implement this. visionOS has a lot of constraints, so many features need to be taken out.
0 replies · 0 boosts · 411 views · Mar ’24
What is the maximum data processing speed?
For example: we use DockKit for birdwatching, so we have an unknown field distance and direction (Distance = ?, Direction = ?), for example from the rock where the observation is made. The task is to recognize the number of birds caught in the frame, draw a detection box around each, and collect statistics. Question: What is the maximum number of frames per second that can be processed with custom object recognition? If that is not enough, can I do the calculations myself and hand the results to DockKit for fast movement?
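
For the per-frame counting part, a hedged sketch of counting detections from a custom Core ML object detector with Vision; the "bird" label and the wrapped model are assumptions, and this says nothing about DockKit's own tracking throughput.

import Vision
import CoreVideo

// Count detections whose top label is "bird" in a single camera frame.
// `model` is a VNCoreMLModel wrapping a custom object-detection model.
func countBirds(in pixelBuffer: CVPixelBuffer, model: VNCoreMLModel) throws -> Int {
    let request = VNCoreMLRequest(model: model)
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
    try handler.perform([request])
    let detections = request.results as? [VNRecognizedObjectObservation] ?? []
    return detections.filter { $0.labels.first?.identifier == "bird" }.count
}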
0 replies · 0 boosts · 425 views · Apr ’24
Reading 12K panoramic images fails with a system API error; does visionOS not support viewing 12K panoramic photos?
extension Entity {
    func addPanoramicImage(for media: WRMedia) {
        let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
            receiveCompletion: {
                switch $0 {
                case .finished: break
                case .failure(let error): assertionFailure("\(error)")
                }
            },
            receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                self.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1E3),
                    materials: [material]
                ))
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, -1, 0.0)
            }
        )
        components.set(Entity.WRSubscribeComponent(subscription: subscription))
    }

    func updateRotation(for media: WRMedia) {
        let angle = Angle.degrees(0.0)
        let rotation = simd_quatf(angle: Float(angle.radians), axis: SIMD3<Float>(0, 0.0, 0))
        self.transform.rotation = rotation
    }

    struct WRSubscribeComponent: Component {
        var subscription: AnyCancellable
    }
}

It fails in the failure case:

case .failure(let error): assertionFailure("\(error)")

Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
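
A 12K panorama can exceed the maximum Metal texture dimension on a given device, which is one possible explanation for the decode failure. A hedged sketch of downsampling the image with ImageIO before generating the texture; the 8192-pixel cap and loading from a file URL are assumptions.

import ImageIO
import CoreGraphics
import RealityKit

enum PanoramaError: Error { case decodeFailed }

// Downsample an oversized panorama to a GPU-friendly size, then build
// a RealityKit texture from the smaller CGImage.
func loadDownsampledTexture(from url: URL, maxPixelSize: Int = 8192) throws -> TextureResource {
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ]
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary) else {
        throw PanoramaError.decodeFailed
    }
    return try TextureResource.generate(from: cgImage, options: .init(semantic: .color))
}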
1 reply · 0 boosts · 427 views · Apr ’24