Post not yet marked as solved
You can take a look at this article; I hope it helps: Barcode detection using Vision framework — https://cornerbit.tech/barcode-detection-using-vision-framework/
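To complement the linked article, here is a minimal sketch of barcode detection with Vision's `VNDetectBarcodesRequest`. The function name and the dispatch queue choice are illustrative, not taken from the article:

```swift
import UIKit
import Vision

// Minimal sketch: detect barcodes in a UIImage using VNDetectBarcodesRequest.
// Error handling is simplified for brevity.
func detectBarcodes(in image: UIImage, completion: @escaping ([VNBarcodeObservation]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }
    let request = VNDetectBarcodesRequest { request, _ in
        // Each observation carries the symbology and decoded payload string.
        let observations = request.results as? [VNBarcodeObservation] ?? []
        completion(observations)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    // Perform the request off the main thread; Vision work can be expensive.
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

Each returned `VNBarcodeObservation` exposes `payloadStringValue` for the decoded contents and `boundingBox` in Vision's normalized coordinates.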
Here are all the sounds supported in the beta version of Xcode: list of sounds
Where can I find a comprehensive list of all the classes that the built-in Sound Classifier model supports?
Where can I download the code for WWDC21 session 10041?
If you are asking where to get the code snippets associated with that video, they are available in the Apple Developer app. Navigate to the video in the app, and select the Code tab to see the snippets.
Hello! I was wondering if it would be possible for the sample code for the Meal App to be posted. There are some things I'd like to see regarding MLLinearRegressor and how models can be personalized with context and data.
We haven't released that demo as a full sample project; however, the code snippets showing exactly how the regressor was trained are included with the video in the Developer app.
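For reference while the snippets are only in the Developer app, here is a hedged sketch of training an `MLLinearRegressor` with Create ML. The CSV path, column names, and save location are placeholders, not values from the Meal App demo:

```swift
import CreateML
import Foundation

// Sketch: train a linear regressor on tabular data with Create ML (macOS).
// "meals.csv", "rating", and the output path are hypothetical placeholders.
let dataURL = URL(fileURLWithPath: "meals.csv")
let data = try MLDataTable(contentsOf: dataURL)

// Split off a test set so we can check how well the model generalizes.
let (trainingData, testData) = data.randomSplit(by: 0.8, seed: 42)

// Train the regressor to predict the "rating" column from the others.
let regressor = try MLLinearRegressor(trainingData: trainingData,
                                      targetColumn: "rating")

// Evaluate on held-out data and persist as a Core ML model.
let evaluation = regressor.evaluation(on: testData)
print("RMSE:", evaluation.rootMeanSquaredError)
try regressor.write(to: URL(fileURLWithPath: "MealRating.mlmodel"))
```

On device, a model like this can then be personalized with user context by updating or retraining on locally collected data, which is the pattern the video demonstrates.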
Hi, I have two questions regarding the ActionAndVision sample application. After setting up the live AVCaptureSession: (1) How exactly, and in which function, is the sample buffer handed to a Vision handler to perform a Vision request (e.g., the getLastThrowType request)? (2) When and how is the captureOutput(...) function in CameraViewController called (line 268 ff.)? I appreciate any help; thank you very much.
I found the code on Apple Developer app. Thank you!
I am trying to follow Frank's demo on analyzing a punch card using contour detection, and I'm stuck with contoursObservation.normalizedPath. I'm trying to draw the detected path on top of my UIImage. Any sample code or reference on converting the normalized path to the UIImage's context would help me greatly.
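A possible approach, sketched below: Vision's `normalizedPath` is a `CGPath` in normalized coordinates ([0, 1] with a lower-left origin), so drawing it over a `UIImage` requires flipping vertically and scaling up to the image size. The function name and stroke styling are my own choices, not from the demo:

```swift
import UIKit
import Vision

// Sketch: render a VNContoursObservation's normalizedPath on top of a UIImage.
// Vision's normalized coordinates run 0...1 with the origin at the lower left,
// so we flip the y-axis and scale to the image's point size before stroking.
func drawContours(_ observation: VNContoursObservation, over image: UIImage) -> UIImage {
    let size = image.size
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { context in
        image.draw(at: .zero)
        let cgContext = context.cgContext
        // Map normalized (lower-left) coordinates into UIKit (upper-left) space.
        cgContext.translateBy(x: 0, y: size.height)
        cgContext.scaleBy(x: size.width, y: -size.height)
        cgContext.addPath(observation.normalizedPath)
        cgContext.setStrokeColor(UIColor.red.cgColor)
        // Divide by the scale so the stroke stays roughly 2 points wide.
        cgContext.setLineWidth(2.0 / size.height)
        cgContext.strokePath()
    }
}
```

The same transform (flip y, scale by width/height) also works if you draw into a `CAShapeLayer` instead of re-rendering the image.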
You can actually find the mentioned kernel in the Code section for this video, but only in the Apple Developer App, not on the web. If you navigate to this video in the Developer App, click the Code tab, and scroll down to the 23:05 mark, you will see the kernel!
In the WWDC "Explore Computer Vision APIs" presentation it was said (at 23:30) that the Core Image kernel for optical flow would be made available in the slide attachments. Does anyone know where to find it?
I have already found the answer: the method is called automatically whenever a new video frame is received via the delegate.
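For anyone else following along, the delegate wiring can be sketched as follows. `AVCaptureVideoDataOutput` invokes `captureOutput(_:didOutput:from:)` once per captured frame, and each frame can be wrapped in a `VNImageRequestHandler`. The class name and the specific Vision request are placeholders, not the sample's own:

```swift
import AVFoundation
import Vision

// Sketch: AVCaptureVideoDataOutput delivers every captured frame to its
// delegate's captureOutput(_:didOutput:from:), where it can be handed to Vision.
final class FrameHandler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        // Run whatever Vision request the app needs for this frame
        // (body pose is just an illustrative choice here).
        try? handler.perform([VNDetectHumanBodyPoseRequest()])
    }
}

// Registration, on an AVCaptureSession configured elsewhere:
// videoDataOutput.setSampleBufferDelegate(frameHandler, queue: videoProcessingQueue)
```

Once the delegate is registered on the session's video data output, the callback fires for every frame with no further action needed from the app.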
Demo "Classifying Images with Vision and Core ML" is crashing: the demo app Classifying Images with Vision and Core ML - https://developer.apple.com/documentation/vision/classifying_images_with_vision_and_core_ml has gone outdated, requiring conversion to the newest Swift version (with the converter working incorrectly). Furthermore, the demo app crashes, raising this exception (right at the AppDelegate declaration): Thread 1: Your application has presented a UIAlertController () of style UIAlertControllerStyleActionSheet from Vision_ML_Example.ImageClassificationViewController (). The modalPresentationStyle of a UIAlertController with this style is UIModalPresentationPopover. You must provide location information for this popover through the alert controller's popoverPresentationController. You must provide either a sourceView and sourceRect or a barButtonItem. If this information is not known when you present the alert controller, you may provide it in the UIPopoverPresentationControllerDelegate method.
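A possible workaround, sketched below: on iPad, a `.actionSheet` alert is presented as a popover and needs an anchor, so setting `sourceView`/`sourceRect` (or `barButtonItem`) on the alert's `popoverPresentationController` before presenting avoids this crash. The function name, action titles, and `anchorView` are illustrative placeholders, not the sample project's own code:

```swift
import UIKit

// Sketch: anchor an action-sheet UIAlertController so iPad's popover
// presentation has the location information it requires.
func presentPhotoSourcePicker(from viewController: UIViewController,
                              anchoredTo anchorView: UIView) {
    let alert = UIAlertController(title: nil, message: nil, preferredStyle: .actionSheet)
    alert.addAction(UIAlertAction(title: "Take Photo", style: .default))
    alert.addAction(UIAlertAction(title: "Choose Photo", style: .default))
    alert.addAction(UIAlertAction(title: "Cancel", style: .cancel))

    // On iPad the action sheet becomes a popover; give it an anchor.
    if let popover = alert.popoverPresentationController {
        popover.sourceView = anchorView
        popover.sourceRect = anchorView.bounds
    }
    viewController.present(alert, animated: true)
}
```

On iPhone the popover configuration is ignored, so the same code runs safely on both device families.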