[Create ML Components] The transformer is not representable as a CoreML model (ImageReader).

Following this guide https://developer.apple.com/documentation/CreateML/creating-a-multi-label-image-classifier

Has anyone been able to export a Core ML model, specifically following the documentation below? Since there aren't any runnable examples, just snippets, perhaps this is a documentation error? If anyone is already familiar with these pipeline generics, is there something that jumps out about the example transformer that fails conformance, or is the example simply incorrect?

Export the model to use with Vision

After you train the model, you can export it as a Core ML model.

// Export to Core ML
let modelURL = URL(filePath: "/path/to/model")
try model.export(to: modelURL)
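
For reference, this is roughly the chained pipeline I take the guide to be describing (a sketch only, reusing the variable names from my code below); fitting and exporting it is what produces the "not representable as a CoreML model (ImageReader)" error for me:

import CreateMLComponents

// Chained pipeline as I read the guide: reader -> feature print -> classifier.
// `labels`, `training`, `validation`, and `modelFile` come from my code below.
let estimator = ImageReader()
    .appending(ImageFeaturePrint(revision: 2))
    .appending(FullyConnectedNetworkMultiLabelClassifier<Float, String>(labels: labels))

Task {
    // `training` / `validation` here are the annotated file URLs, since the
    // ImageReader is part of the pipeline and loads the images itself.
    let model = try await estimator.fitted(to: training, validateOn: validation) { event in
        debugPrint(event)
    }
    // This is the call that fails with the ImageReader error.
    try model.export(to: modelFile)
}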

I somehow got it to export by composing the transformer explicitly and applying the ImageReader as a separate step, which gives a ComposedTransformer<ImageFeaturePrint, FullyConnectedNetworkMultiLabelClassifierModel> with no ImageReader inside it. My guess is that the chained pipeline in the documentation was never actually exported, since its fitted transformer still contains the ImageReader and that return type isn't representable as a Core ML model:

import CreateMLComponents
import Foundation

// Pair each image URL with its label annotations.
let annotatedFeatures = detectionFiles.map {
    AnnotatedFeature(
        feature: directoryURL.appending(component: $0.filename),
        annotation: $0.labels
    )
}

let reader = ImageReader()
let (training, validation) = annotatedFeatures.randomSplit(by: 0.8)

// Compose only the Core ML-representable parts: feature extractor + classifier.
let featurePrint = ImageFeaturePrint(revision: 2)
let classifier = FullyConnectedNetworkMultiLabelClassifier<Float, String>(labels: labels)
let task = featurePrint.appending(classifier)

Task {
    // Apply the ImageReader up front instead of including it in the fitted pipeline.
    let trainingImages = try await reader.applied(to: training)
    let validationImages = try await reader.applied(to: validation)
    
    let model = try await task.fitted(
        to: trainingImages,
        validateOn: validationImages,
        eventHandler: { event in
            debugPrint(event)
        }
    )
    // Exports successfully because the ImageReader is not part of the fitted transformer.
    try model.export(to: modelFile)
}
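
Since the doc section is titled "Export the model to use with Vision", this is roughly how I'd consume the exported model afterwards (a sketch only; the runtime compile step and imageURL are my assumptions, not from the guide):

import CoreML
import Vision

// Compile the exported model at runtime, then wrap it for Vision.
let compiledURL = try MLModel.compileModel(at: modelFile)
let visionModel = try VNCoreMLModel(for: MLModel(contentsOf: compiledURL))

// Run a classification request against an image on disk (imageURL is a placeholder).
let request = VNCoreMLRequest(model: visionModel)
try VNImageRequestHandler(url: imageURL).perform([request])

// Multi-label output: one observation per label with a confidence score.
for observation in request.results as? [VNClassificationObservation] ?? [] {
    print(observation.identifier, observation.confidence)
}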

Thanks for pointing this out!
