I meant this video: https://developer.apple.com/wwdc21/10036?time=464
Does anyone have a ready-made script/shortcut like the one shown in the video?
I am testing in Swift Playgrounds on a physical iPad :) Do you know how to fix it? Could you describe the error you are encountering so that we can help? Does the app throw a specific error or show a warning in the Console?
Thank you, I'll take a look at that link. In general, I'm looking for a suite of APIs similar to https://developer.apple.com/documentation/vision/identifying_3d_human_body_poses_in_images but working in real time on visionOS data.
Hello, there is no API that gives you access to a visionOS camera stream, and there is no API that detects body poses in the scene for you. I suggest that you file an enhancement request using Feedback Assistant to request an API that would enable the feature you are trying to build in your app!
I'm exploring my Vision Pro and finding it unclear whether I can even achieve things like body pose detection. https://developer.apple.com/videos/play/wwdc2023/111241/ It's clear that I can apply it to self-provided images, but what about the data coming from the visionOS SDKs? All I can find is the mesh data from ARKit (https://developer.apple.com/documentation/arkit/arkit_in_visionos). Am I missing something, or do we not yet have good APIs for this? Appreciate any guidance! Thanks.
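For what it's worth, the self-provided-image path is straightforward today. A minimal sketch, assuming you already have a CGImage from a photo or file (the function name is illustrative, not from the session):

```swift
import Vision

// Runs Vision's 2D body pose request on an image you supply yourself.
// This works with your own images; per the reply above, there is no
// visionOS camera stream to feed it.
func detectBodyPoses(in cgImage: CGImage) throws -> [VNHumanBodyPoseObservation] {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results ?? []
}
```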
I also previously had a problem using the Vision framework in Playgrounds (it doesn't work well in the preview or the simulator). I think it will work if you test it on an actual device.
Hi Developers, I want to create a Vision app in Swift Playgrounds on iPad. However, Vision does not function properly in Swift Playgrounds on iPad or in Xcode playgrounds; the Vision code only works in a normal Xcode project. So can I submit my Swift Student Challenge 2024 application as a normal Xcode project rather than as an Xcode playground or Swift Playgrounds file? Thanks :)
Is there a way to determine finger joint/root circumference, finger length, tip-of-finger-to-wrist-crease distance, hand breadth, and wrist breadth with Vision hand pose? Or is there an alternative method? Any insight is appreciated.
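For context on what the hand pose request actually returns: 2D joint locations in normalized image coordinates, with no physical scale, so absolute circumference or breadth in real-world units is not available from Vision alone. Relative lengths can be approximated by summing joint-to-joint distances. A minimal sketch (the joint chain and confidence threshold are illustrative):

```swift
import Vision

// Approximates index-finger length as the summed segment lengths along the
// joint chain, in normalized image coordinates (0...1), not physical units.
func approximateIndexFingerLength(from observation: VNHumanHandPoseObservation) throws -> CGFloat {
    let chain: [VNHumanHandPoseObservation.JointName] = [.indexTip, .indexDIP, .indexPIP, .indexMCP]
    let points = try chain.map { try observation.recognizedPoint($0) }
    // Ignore the hand if any joint was detected with low confidence.
    guard points.allSatisfy({ $0.confidence > 0.3 }) else { return 0 }
    return zip(points, points.dropFirst()).reduce(0) { total, pair in
        total + hypot(pair.0.location.x - pair.1.location.x,
                      pair.0.location.y - pair.1.location.y)
    }
}
```

Converting to physical units would need a known reference scale in the frame, or depth information from something like ARKit; circumference is not derivable from a 2D skeleton at all.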
Hello, I am Pieter Bikkel. I study Software Engineering at the HAN University of Applied Sciences, and I am working on an app that can recognize volleyball actions using machine learning. A volleyball coach can put an iPhone on a tripod and analyze a volleyball match: for example, where the ball lands in the court and how hard it is served. I was inspired by this session and wondered if I could interview one of the experts in this field. This would allow me to develop my app even better. I hope you can help me with this.
Can you share the source code for the demo of the Vision face detector with the metrics (roll, yaw, and pitch) displayed? You provide some code online, but not for this portion of the presentation.
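Not the session's source, but a minimal sketch of reading those metrics from VNFaceObservation. Pitch requires the revision-3 face detector (iOS 15 / macOS 12 or later), and all three values are in radians:

```swift
import Vision

// Detects faces and prints the head-pose metrics shown in the session.
func logFacePose(in cgImage: CGImage) throws {
    let request = VNDetectFaceRectanglesRequest()
    request.revision = VNDetectFaceRectanglesRequestRevision3 // needed for pitch
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    for face in request.results ?? [] {
        let roll = face.roll?.doubleValue ?? .nan
        let yaw = face.yaw?.doubleValue ?? .nan
        let pitch = face.pitch?.doubleValue ?? .nan
        print("roll \(roll), yaw \(yaw), pitch \(pitch) (radians)")
    }
}
```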
I am looking for the examples demoed by Frank in session wwdc21-10041. I can't seem to find them anywhere. Any lead is appreciated.
First of all, this Vision API is amazing; the OCR is very accurate. I've been looking to multiprocess using the Vision API. I have about 2 million PDFs I want to OCR, and I want to run multiple threads / parallel processing to OCR each one. I tried PyObjC, but it does not work so well. Any suggestions on tackling this problem?
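One direction, sketched here on the assumption that a small native Swift tool is an option rather than PyObjC: render each PDF page with Core Graphics, OCR it with VNRecognizeTextRequest, and fan the work out across cores with concurrentPerform. Each iteration gets its own request and handler, which keeps it thread-safe; the driver loop and `pdfURLs` are illustrative:

```swift
import Foundation
import Vision

// OCRs the first page of one PDF: render it to a bitmap, then run text recognition.
func recognizeText(inFirstPageOf url: URL) throws -> [String] {
    guard let doc = CGPDFDocument(url as CFURL), let page = doc.page(at: 1) else { return [] }
    let box = page.getBoxRect(.mediaBox)
    let scale: CGFloat = 2 // render at 2x for better recognition accuracy
    guard let ctx = CGContext(data: nil,
                              width: Int(box.width * scale), height: Int(box.height * scale),
                              bitsPerComponent: 8, bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue)
    else { return [] }
    ctx.setFillColor(gray: 1, alpha: 1) // white background behind transparent pages
    ctx.fill(CGRect(x: 0, y: 0, width: box.width * scale, height: box.height * scale))
    ctx.scaleBy(x: scale, y: scale)
    ctx.drawPDFPage(page)
    guard let image = ctx.makeImage() else { return [] }

    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    return request.results?.compactMap { $0.topCandidates(1).first?.string } ?? []
}

let pdfURLs: [URL] = [] // fill with your 2 million document URLs
DispatchQueue.concurrentPerform(iterations: pdfURLs.count) { i in
    let lines = (try? recognizeText(inFirstPageOf: pdfURLs[i])) ?? []
    print(pdfURLs[i].lastPathComponent, lines.count, "lines")
}
```

For 2 million documents you would also want batching and on-disk checkpointing, but the per-document core is this small.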
I'm referring to this talk: https://developer.apple.com/videos/play/wwdc2021/10152 I was wondering if the code for the image composition project he demonstrates at the end of the talk (around 24:00) is available somewhere. I would much appreciate any help.
Hello, is there an API available for Visual Look Up? https://support.apple.com/en-gb/guide/iphone/iph21c29a1cf/ios