[tags:machine learning,vision]

102 results found

Post not yet marked as solved
1 Replies
This is the video I meant: https://developer.apple.com/wwdc21/10036?time=464
Post not yet marked as solved
7 Replies
824 Views
Hi Developers, I want to create a Vision app in Swift Playgrounds on iPad. However, Vision does not function properly in Swift Playgrounds on iPad or in Xcode Playgrounds; the Vision code only works in a normal Xcode project. So can I submit my Swift Student Challenge 2024 application as a normal Xcode project rather than as an Xcode Playgrounds or Swift Playgrounds file? Thanks :)
Post not yet marked as solved
7 Replies
I am testing the playground on a physical iPad :) Do you know how to fix it?
Could you describe the error you are encountering so that we can help? Does the app throw a specific error or show a warning in the console?
Post not yet marked as solved
2 Replies
Thank you, I'll take a look at that link. In general I'm after a suite of APIs similar to https://developer.apple.com/documentation/vision/identifying_3d_human_body_poses_in_images, but working in real time on visionOS data.
Post marked as solved
2 Replies
703 Views
I'm exploring my Vision Pro and finding it unclear whether I can even achieve things like body pose detection, etc. https://developer.apple.com/videos/play/wwdc2023/111241/ It's clear that I can apply it to self-provided images, but what about the data coming from the visionOS SDKs? All I can find is this mesh data from ARKit, https://developer.apple.com/documentation/arkit/arkit_in_visionos - am I missing something, or do we not yet have good APIs for this? Appreciate any guidance! Thanks.
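For the self-provided-images path, a minimal sketch of what that could look like with Vision's 2D body pose request (assuming you already have a CGImage from somewhere; someCGImage below is hypothetical):

import Vision

// Runs 2D body pose detection on an image you supply yourself; there is no
// visionOS camera-stream input for this (see the reply below).
func detectBodyPose(in cgImage: CGImage) throws -> [VNHumanBodyPoseObservation] {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results ?? []
}

// Example: print the recognized joints of the first detected person.
// recognizedPoints(.all) returns normalized image coordinates plus a confidence per joint.
if let person = try? detectBodyPose(in: someCGImage).first,
   let joints = try? person.recognizedPoints(.all) {
    for (name, point) in joints where point.confidence > 0.3 {
        print(name, point.location)
    }
}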
Posted by nkarpov.
Post marked as solved
2 Replies
Hello, there is no API that gives you access to a visionOS camera stream, and there is no API that detects body poses in the scene for you, so I suggest that you file an enhancement request using Feedback Assistant to request an API that would enable the feature you are trying to build in your app!
Post not yet marked as solved
7 Replies
I also previously had a problem using the Vision framework in a playground (it doesn't work well in the preview or the simulator). I think it will work if you test it on an actual device.
Post not yet marked as solved
2 Replies
2k Views
I downloaded the sample code from the WWDC 2022 session Counting human body action repetitions in a live video feed and ran it on my new iPhone SE (which has an A15 Bionic chip). Unfortunately, this sample project (whose action repetition counter was mentioned multiple times during WWDC) was extremely inconsistent in tracking reps. It rarely worked for me, which was disappointing because I was really excited about this functionality. I'd like to use this action repetition counting in an app of my own, and it would be very useful if it worked, but I'm skeptical after struggling to get Apple's sample app to count reps accurately. Does anyone have any suggestions for getting this sample project, or action repetition counting in general, to work accurately? Any help would be really appreciated, thanks!
Post not yet marked as solved
2 Replies
Have you had any update on this? We'd like to hear from you. If possible, please make a screen recording and file a bug via Feedback Assistant so we can investigate. By the way, this sample code was tagged as Vision instead of Create ML, which is why we could not respond in time.
Post not yet marked as solved
2 Replies
Found a solution: https://betterprogramming.pub/how-to-build-a-yolov5-object-detection-app-on-ios-39c8c77dfe58
git clone https://github.com/hietalajulius/yolov5
python export-nms.py --include coreml --weights yolov5n.pt
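If that export produces a Core ML pipeline model with non-maximum suppression built in (which is what that guide's export-nms.py appears to be for), Vision should then hand back VNRecognizedObjectObservation results. A rough sketch of the consuming side; the generated class name yolov5n is an assumption based on the weights file:

import Vision
import CoreML

// Build a Vision request around the converted model. The class name "yolov5n"
// is hypothetical; Xcode generates it from whatever the .mlmodel file is named.
func makeDetectionRequest() throws -> VNCoreMLRequest {
    let coreMLModel = try yolov5n(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // With an NMS pipeline model, results arrive as VNRecognizedObjectObservation
        // (labels + confidence + normalized boundingBox) instead of a raw feature value.
        guard let objects = request.results as? [VNRecognizedObjectObservation] else { return }
        for object in objects {
            let best = object.labels.first
            print(best?.identifier ?? "?", best?.confidence ?? 0, object.boundingBox)
        }
    }
    request.imageCropAndScaleOption = .scaleFill
    return request
}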
Post not yet marked as solved
2 Replies
1.4k Views
I'm training a machine learning model in PyTorch using YOLOv5 from Ultralytics. Apple's coremltools is used to convert the PyTorch (.pt) model into a Core ML model (.mlmodel). This works fine, and I can use it in my iOS app, but I have to access the prediction output of the model manually. The output of the model is a MultiArray (Float32, 1 × 25500 × 46). From the VNCoreMLRequest I receive only a VNCoreMLFeatureValueObservation; from this I can get the MultiArray and iterate through it to find the data I need. But I see that for object detection models Apple offers the VNRecognizedObjectObservation type, which is not returned for my model. What is the reason my model does not return VNRecognizedObjectObservation? Can I use coremltools to enable it?
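As far as I know, Vision only returns VNRecognizedObjectObservation when the Core ML model is an object-detection pipeline with non-maximum suppression appended, i.e. one whose outputs are the standard "confidence" and "coordinates" arrays; a model with a single 1 × 25500 × 46 output therefore comes back as a feature value, and coremltools can address this by building that pipeline (see the export linked in the reply above). In the meantime, here is a rough sketch of walking the MultiArray, assuming the usual YOLOv5 raw layout of [cx, cy, w, h, objectness, class scores...] per row (verify this against your export):

import Vision
import CoreML

// Decode a raw YOLOv5-style output (1 x boxes x 46). The per-row layout is assumed
// to be [cx, cy, w, h, objectness, 41 class scores]; adjust if your export differs.
func decode(_ observation: VNCoreMLFeatureValueObservation, threshold: Float = 0.3) {
    guard let array = observation.featureValue.multiArrayValue else { return }
    let boxes = array.shape[1].intValue   // e.g. 25500 candidate boxes
    let values = array.shape[2].intValue  // e.g. 46 values per box
    for row in 0..<boxes {
        let r = NSNumber(value: row)
        let objectness = array[[0, r, 4]].floatValue
        if objectness < threshold { continue }
        let cx = array[[0, r, 0]].floatValue
        let cy = array[[0, r, 1]].floatValue
        let w  = array[[0, r, 2]].floatValue
        let h  = array[[0, r, 3]].floatValue
        // Class scores occupy the remaining columns; pick the best one.
        var bestClass = 0
        var bestScore: Float = 0
        for c in 5..<values {
            let s = array[[0, r, NSNumber(value: c)]].floatValue
            if s > bestScore { bestScore = s; bestClass = c - 5 }
        }
        print("box", cx, cy, w, h, "class", bestClass, "score", objectness * bestScore)
    }
}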
Post not yet marked as solved
2 Replies
Thank you very much. Is there any information about how to translate a Vision bounding box CGRect to UIKit coordinate space in Objective-C? I didn't find any page with that information. In the tutorial you attached there are some functions in Swift that I can't translate to Objective-C. This one: let rectangles = boxesAndPayload.map { $0.box } .map { CGRect(origin: $0.origin.translateFromCoreImageToUIKitCoordinateSpace(using: image.size.height), size: $0.size) } Thank you for your time
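The translation itself is just a scale from normalized coordinates plus a vertical flip (Vision and Core Image use a lower-left origin, UIKit an upper-left one). Here is a small sketch of it in Swift; since it is only CGRect arithmetic, it should port to Objective-C almost line for line with CGRectMake:

import Vision
import CoreGraphics

// Converts a Vision normalized boundingBox (lower-left origin, 0...1 range) into a
// UIKit-style rect (upper-left origin) in pixel coordinates for an image of the given size.
func uiKitRect(forNormalizedRect normalized: CGRect, imageSize: CGSize) -> CGRect {
    // Scale from normalized to pixel coordinates.
    let pixelRect = VNImageRectForNormalizedRect(normalized,
                                                 Int(imageSize.width),
                                                 Int(imageSize.height))
    // Flip vertically: UIKit's y axis grows downward.
    return CGRect(x: pixelRect.origin.x,
                  y: imageSize.height - pixelRect.origin.y - pixelRect.height,
                  width: pixelRect.width,
                  height: pixelRect.height)
}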
Post not yet marked as solved
2 Replies
867 Views
Hi, I have seen this video: https://developer.apple.com/videos/play/wwdc2021/10041/ and in my project I am trying to draw the detected barcodes. I am using the Vision framework and I have the barcode position in the boundingBox parameter, but I don't understand the CGRect in that parameter. I am programming in Objective-C and I don't find resources for it, and to complicate things further I don't have an image: I am capturing barcodes from a video camera session. Two parts: 1 - how can I draw a detected barcode like in the video (from an image)? 2 - how can I draw a detected barcode in a capture session? I have used VNImageRectForNormalizedRect to go from normalized to pixel coordinates, but the result is not correct. Thank you very much.
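For the image case, the conversion is the same scale-and-flip shown in the sketch a couple of posts above. For the capture-session case, one common approach (not the only one) is to let the preview layer do the conversion instead of VNImageRectForNormalizedRect, because it already accounts for video gravity and rotation. A rough Swift sketch, assuming previewLayer is your AVCaptureVideoPreviewLayer and that the orientation you hand to Vision matches the preview:

import Vision
import AVFoundation

// Converts a Vision barcode boundingBox (lower-left origin, normalized) into
// preview-layer coordinates, where you can draw an overlay (e.g. a CAShapeLayer).
func layerRect(for observation: VNBarcodeObservation,
               in previewLayer: AVCaptureVideoPreviewLayer) -> CGRect {
    let box = observation.boundingBox
    // Metadata output rects use an upper-left origin, so flip the y axis first.
    let metadataRect = CGRect(x: box.origin.x,
                              y: 1 - box.origin.y - box.height,
                              width: box.width,
                              height: box.height)
    return previewLayer.layerRectConverted(fromMetadataOutputRect: metadataRect)
}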