What's new in Vision


Discuss the WWDC22 Session What's new in Vision

Posts under wwdc2022-10024 tag

Post not yet marked as solved
0 Replies
100 Views
I’m currently working on implementing optical flow in Swift. There is sample code available from WWDC20 (with brief mentions of an update during WWDC22), including the actual CIKernel filter, code to apply the filter, and Vision code to instantiate an optical flow request. I’ve been having trouble and was wondering if anyone has been able to successfully use the sample code in a project. Does anyone have any suggestions or resources?
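For reference, the Vision side of this is a targeted-image request. Here is a minimal sketch, assuming two `CGImage` frames you supply yourself (`previousFrame`, `currentFrame`); the function name and the choice of accuracy level are illustrative, not from the WWDC sample code:

```swift
import Vision
import CoreVideo
import CoreGraphics

// Sketch: request dense optical flow between two frames with
// VNGenerateOpticalFlowRequest. Which image counts as "previous"
// follows the targeted-image convention: the request is created
// with one frame and performed against the other.
func opticalFlow(from previousFrame: CGImage,
                 to currentFrame: CGImage) throws -> CVPixelBuffer? {
    let request = VNGenerateOpticalFlowRequest(targetedCGImage: previousFrame,
                                               options: [:])
    request.computationAccuracy = .high   // .low / .medium / .high / .veryHigh

    let handler = VNImageRequestHandler(cgImage: currentFrame, options: [:])
    try handler.perform([request])

    // The observation's pixel buffer stores two 32-bit floats per pixel:
    // the x and y displacement of that pixel between the frames.
    guard let observation = request.results?.first else { return nil }
    return observation.pixelBuffer
}
```

The returned buffer is what the WWDC20 sample's CIKernel filter visualizes by mapping displacement vectors to colors.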
Post not yet marked as solved
0 Replies
135 Views
I've noticed that one of the key features of iOS/iPadOS 16 and macOS Ventura is the ability to copy the subject of an image, whether that's a person, a pet, or even an object. It's a great feature that's already proving very useful to me. I wanted to check whether it's possible to call that feature from code, so I went looking for its API, but I couldn't find it exactly. I did find the API for VNGeneratePersonSegmentationRequest (https://developer.apple.com/documentation/vision/vngeneratepersonsegmentationrequest) and promptly wrote a sample app with it. It works great; however, this API works only with human subjects. I tried various pictures that work with the "Copy Subject" feature, and they just don't work the same way with VNGeneratePersonSegmentationRequest, and it doesn't seem to tap the same Apple silicon-based acceleration that Copy Subject has. My question, therefore, is: is there an API for this feature, or is it strictly a user-facing function of macOS Ventura and iOS/iPadOS 16?
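For context, the person-segmentation API the post refers to can be driven as below. This is a minimal sketch, assuming a `CGImage` you supply; the function name and quality settings are illustrative:

```swift
import Vision
import CoreVideo
import CoreGraphics

// Sketch: generate a person mask with VNGeneratePersonSegmentationRequest.
// As the post notes, this request segments people only; it is not the
// same model that powers the system-wide "Copy Subject" feature.
func personMask(for image: CGImage) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .accurate                      // .fast / .balanced / .accurate
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    // The result is a soft, single-channel mask:
    // 0 = background, 255 = person.
    return request.results?.first?.pixelBuffer
}
```

The mask can then be used with Core Image (e.g. `CIBlendWithMask`) to cut the person out of the original image.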