I’m currently working on implementing optical flow in Swift. There is sample code available from WWDC20 (with brief mentions of an update during WWDC22), including the actual CIKernel filter, code to apply the filter, and Vision code to instantiate an optical flow request. I’ve been having trouble getting it to work and was wondering if anyone has been able to use the sample code successfully in a project. Does anyone have any suggestions or resources?
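For reference, this is roughly how I'm instantiating the optical flow request. It's a minimal sketch, assuming you already have the previous and current frames as `CGImage`s (the helper name `opticalFlow` is just for illustration):

```swift
import Vision
import CoreVideo

// Hypothetical helper: compute dense optical flow between two frames.
func opticalFlow(from previous: CGImage, to current: CGImage) throws -> CVPixelBuffer? {
    // The request is "targeted" at the previous frame; the handler supplies the current one.
    let request = VNGenerateOpticalFlowRequest(targetedCGImage: previous, options: [:])
    request.computationAccuracy = .medium   // .low / .medium / .high / .veryHigh
    request.outputPixelFormat = kCVPixelFormatType_TwoComponent32Float

    let handler = VNImageRequestHandler(cgImage: current, options: [:])
    try handler.perform([request])

    // Each pixel of the result holds a (dx, dy) displacement vector.
    return (request.results?.first as? VNPixelBufferObservation)?.pixelBuffer
}
```

The resulting two-component pixel buffer is what the WWDC20 sample feeds into its CIKernel to visualize the flow field.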
I've noticed that one of the key features of iOS/iPadOS 16 and macOS Ventura is the ability to copy the subject of an image, whether that's a person, a pet, or even an object.
It is a great feature which is already proving to be very useful to me.
I wanted to check whether it's possible to call that feature from code, so I went looking for the APIs for it, but I couldn't find anything that matches it exactly.
I did find the API for VNGeneratePersonSegmentationRequest and promptly wrote a sample app with it:
https://developer.apple.com/documentation/vision/vngeneratepersonsegmentationrequest
It works great; however, this API works only with human subjects. I tried various pictures that work with the "Copy Subject" feature, and they don't behave the same way with VNGeneratePersonSegmentationRequest, nor does it seem to tap into the same Apple Silicon acceleration that the Copy Subject feature has.
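For context, this is essentially what my sample app does. A minimal sketch, assuming a `CGImage` input (the helper name `personMask` is just for illustration):

```swift
import Vision
import CoreVideo

// Hypothetical sketch: generate a person-segmentation mask for one image.
func personMask(for image: CGImage) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .accurate            // .fast / .balanced / .accurate
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    // The mask is a single-channel image: values near 255 mark person pixels.
    return (request.results?.first as? VNPixelBufferObservation)?.pixelBuffer
}
```

As described above, this only ever produces masks for people, which is why it falls short of what Copy Subject does for pets and objects.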
My question, therefore, is: are there APIs for this feature, or is it strictly a user-facing function of macOS Ventura and iOS/iPadOS 16?
Does VNDetectTrajectoriesRequest make use of VNGenerateOpticalFlowRequest, and if so, has it been updated to revision 2 in iOS 16?