Using montage video as training data for Action Classifier

I watched the "Build an Action Classifier with Create ML" video from WWDC20 and want to train a model using a montage video. There is a reference to using a JSON or CSV file with timestamps in the video, but I can't find the details in any documentation. I tried putting the JSON file in the same directory as the montage video, but I get a Training Data Error with the description "URL of type 'JSON' must be a file of type CSV or JSON or a path to a folder containing a DataTable binary representation".

I'm not sure what to name the file, or where to place it, or if I'm missing something else entirely.
Use the "directoryWithVideosAndAnnotation" data source (as pointed out above). Note that you need to use the Create ML framework to do this in Swift; the Create ML app currently does not support this yet.

directoryWithVideosAndAnnotation(
    at: URL,                         ===> the directory URL for your videos
    annotationFile: URL,             ===> the CSV, JSON, or TXT annotation file URL
    videoColumn: String,             ===> the video file column name you used in your annotation file
    labelColumn: String,             ===> the label column name you used in your annotation file
    startTimeColumn: String? = nil,  ===> the start time column; optional. If you don't provide it, each clip is assumed to start at 0.
    endTimeColumn: String? = nil)    ===> the end time column; optional. If you don't provide it, each clip is assumed to run to the very end.
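Putting that together, a minimal Swift sketch might look like the following. The file paths and column names here are placeholders you would replace with your own; this requires the CreateML framework on macOS:

```swift
import CreateML
import Foundation

// Placeholder paths: point these at your own videos directory and annotation file.
let videosDir = URL(fileURLWithPath: "/path/to/Videos")
let annotationFile = URL(fileURLWithPath: "/path/to/annotations.csv")

// The column name strings must match the headers in your annotation file.
let dataSource = MLActionClassifier.DataSource.directoryWithVideosAndAnnotation(
    at: videosDir,
    annotationFile: annotationFile,
    videoColumn: "video",
    labelColumn: "label",
    startTimeColumn: "start",
    endTimeColumn: "end")

// Train the classifier and save the resulting Core ML model.
let classifier = try MLActionClassifier(trainingData: dataSource)
try classifier.write(to: URL(fileURLWithPath: "/path/to/ActionClassifier.mlmodel"))
```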



The error you got suggests you are using the wrong data source type. Try this one: "directoryWithVideosAndAnnotation".
The JSON/CSV file can be in the same directory as the videos or in a separate directory, but both the video directory URL and the annotation file URL are needed by the Swift API.
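For reference, a minimal CSV annotation file could look like this. The column names are illustrative; they simply have to match the strings you pass for videoColumn, labelColumn, startTimeColumn, and endTimeColumn, and the labels and timestamps below are made up:

```
video,label,start,end
montage.mov,jumping_jacks,0.0,14.5
montage.mov,squats,14.5,31.2
montage.mov,lunges,31.2,47.8
```

Each row marks one labeled segment of the montage, so a single video file can appear on many rows with different time ranges.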
I've been using the Create ML app to do this, as shown in the WWDC video referenced above, but it sounds like the support isn't there yet?
The Create ML app in the current Xcode 12 beta does not support montage input formats.

The Create ML framework can be used in the current beta if you wish to use this format.

Note that you can still preview how well the resulting Core ML model works with sample data using the model's preview tab in Xcode.