Apply a Core Image filter & Core ML object detection model to a video?

Hi,

I am wondering whether it is possible to apply a Core Image filter to a video capture session and then run a Core ML object detection model on that filtered video?

Accepted Reply

Hello,

Yes, you can do that. Most likely you would also want to train your ML model on video frames that have that same filter applied. Is there a particular part of this process where you are unsure which API to use?


Replies


Thanks for your reply,

So I have pre-trained a custom ML model to detect objects in a certain homographic view, and I have implemented that model in Swift, reading the pixels from the pixel buffer, similar to this example: developer.apple.com/documentation/vision/recognizing_objects_in_live_capture. But before I recognise an object from the AVCapture output, I want to apply a CIPerspectiveCorrection CIFilter to the CVPixelBuffer to create this "homographic view", and then detect the objects from that view.

So I was looking for some guidance on how to implement this: should I be applying the Core Image filter inside the captureOutput(_:didOutput:from:) function of the detection delegate (which hasn't worked so far), or should I be doing this separately, maybe using MetalKit?
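For what it's worth, here is roughly the shape I'd expect that captureOutput path to take. This is only an untested sketch: the corner points are placeholder values you'd replace with the corners of your own homographic view, and it assumes your model is wrapped as a VNCoreMLModel, as in the linked sample.

```swift
import AVFoundation
import CoreImage
import Vision

// Sketch: filter each frame with CIPerspectiveCorrection, then run the
// object-detection request on the corrected image. The corner points are
// placeholders — substitute the corners of your homographic view.
final class FilteredDetector: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    private let context = CIContext()       // reuse; creating one per frame is expensive
    private let visionModel: VNCoreMLModel  // your pre-trained detection model

    init(model: VNCoreMLModel) {
        self.visionModel = model
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // 1. Wrap the pixel buffer and apply the perspective correction.
        //    Note: the filter does NOT modify pixelBuffer in place.
        let input = CIImage(cvPixelBuffer: pixelBuffer)
        let filter = CIFilter(name: "CIPerspectiveCorrection")!
        filter.setValue(input, forKey: kCIInputImageKey)
        filter.setValue(CIVector(cgPoint: CGPoint(x: 100, y: 800)), forKey: "inputTopLeft")
        filter.setValue(CIVector(cgPoint: CGPoint(x: 900, y: 800)), forKey: "inputTopRight")
        filter.setValue(CIVector(cgPoint: CGPoint(x: 0, y: 0)), forKey: "inputBottomLeft")
        filter.setValue(CIVector(cgPoint: CGPoint(x: 1000, y: 0)), forKey: "inputBottomRight")
        guard let corrected = filter.outputImage else { return }

        // 2. Run the detection request on the filtered CIImage directly —
        //    Vision accepts a CIImage, so there is no need to render back
        //    into a CVPixelBuffer just for detection.
        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
            for observation in results {
                print(observation.labels.first?.identifier ?? "?", observation.boundingBox)
            }
        }
        let handler = VNImageRequestHandler(ciImage: corrected, options: [:])
        try? handler.perform([request])
    }
}
```

Note that any bounding boxes the model returns will be in the coordinate space of the corrected image, not the original camera frame, so you'd need to map them back through the inverse perspective transform if you want to draw them over the live preview.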

Hello,

Could you elaborate on what isn't working, exactly? Is the filter not being applied properly? Is the ML model failing to detect objects?
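One thing worth double-checking in the meantime: Core Image is lazy, and a CIFilter never modifies the source CVPixelBuffer in place, so applying the filter inside captureOutput only takes effect if you either hand the filtered CIImage to Vision directly or explicitly render it into a new pixel buffer. A rough, untested sketch of that render-back step (assuming BGRA frames):

```swift
import CoreImage
import CoreVideo

// Sketch: render a filtered CIImage into a fresh CVPixelBuffer.
// Core Image never writes back into the source buffer, so if downstream
// code expects a CVPixelBuffer you must render explicitly like this.
func renderToPixelBuffer(_ image: CIImage, context: CIContext) -> CVPixelBuffer? {
    let width = Int(image.extent.width)
    let height = Int(image.extent.height)
    var output: CVPixelBuffer?
    let attrs: [CFString: Any] = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true
    ]
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     width, height,
                                     kCVPixelFormatType_32BGRA,
                                     attrs as CFDictionary,
                                     &output)
    guard status == kCVReturnSuccess, let buffer = output else { return nil }

    // CIPerspectiveCorrection can move the image extent's origin away from
    // (0, 0); translate it back so the image lands inside the new buffer.
    let normalized = image.transformed(by: CGAffineTransform(
        translationX: -image.extent.origin.x,
        y: -image.extent.origin.y))
    context.render(normalized, to: buffer)
    return buffer
}
```

If you are passing the original (unrendered) pixel buffer to your detection request, the model would still be seeing unfiltered frames, which would look exactly like "the filter isn't being applied".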