After exporting an action classifier from Create ML and importing it into Xcode, how do you use it to make predictions?

I followed Apple's guidance in their articles Creating an Action Classifier Model, Gathering Training Videos for an Action Classifier, and Building an Action Classifier Data Source. With this Core ML model file now imported in Xcode, how do I use it to classify video frames?

For each video frame I call

do {
    let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer)
    try requestHandler.perform([self.detectHumanBodyPoseRequest])
} catch {
    print("Unable to perform the request: \(error.localizedDescription).")
}

But it's unclear to me how to use the results of the VNDetectHumanBodyPoseRequest, which come back as an optional array of VNHumanBodyPoseObservation. How would I feed those results into my custom classifier, whose automatically generated model class is TennisActionClassifier.swift? The classifier makes predictions on the frame's body poses, labeling the action as either playing a rally/point or not playing.
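
Here is roughly what I imagine the glue code should look like, based on keypointsMultiArray() and the model's metadata in Xcode. I'm assuming the generated class expects a 60-frame window, takes an input named poses, and exposes label/labelProbabilities outputs, but I'm not sure this is the right approach:

import Vision
import CoreML

let predictionWindowSize = 60   // assumed window length from the model's metadata
var posesWindow: [MLMultiArray] = []

func classifyIfReady(_ observations: [VNHumanBodyPoseObservation]) throws {
    // Use the most prominent pose in the frame (assuming a single player in view).
    guard let observation = observations.first else { return }

    // keypointsMultiArray() packs the pose into a [1, 3, 18] MLMultiArray
    // (confidence, x, y for each of the 18 joints).
    posesWindow.append(try observation.keypointsMultiArray())

    // Wait until a full window of frames has been collected.
    guard posesWindow.count == predictionWindowSize else { return }

    // Stack the per-frame arrays into one [60, 3, 18] input along axis 0.
    let modelInput = MLMultiArray(concatenating: posesWindow, axis: 0, dataType: .float32)

    // "poses", "label", and "labelProbabilities" are my guesses at the names
    // Xcode generated; they may differ in the actual TennisActionClassifier class.
    let classifier = try TennisActionClassifier(configuration: MLModelConfiguration())
    let prediction = try classifier.prediction(poses: modelInput)
    print(prediction.label, prediction.labelProbabilities)

    posesWindow.removeAll()
}

Is collecting a sliding window of keypointsMultiArray() outputs and concatenating them like this the intended way to drive the classifier, or is there a higher-level API I'm missing?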
