I am trying to develop an app that classifies an image taken with the camera or chosen from the photo library, using a model I trained with Apple's Core ML. The model is properly trained and tested: it showed no problems when I tested it with the Preview tab after adding it to the Xcode project. But when I try to get the prediction in Swift, the results are wrong and completely different from what Preview showed. It feels as if the model were untrained.
This is my code to access the prediction made by the model:
// Convert the UIImage to a CVPixelBuffer (falling back to a placeholder image)
let pixelImage = buffer(from: (image ?? UIImage(named: "imagePlaceholder"))!)
self.imageView.image = image

// Run the Core ML prediction
guard let result = try? imageClassifier!.prediction(image: pixelImage!) else {
    fatalError("unexpected error happened")
}

// Read the predicted label and its probability
let className: String = result.classLabel
let confidence: Double = result.classLabelProbs[result.classLabel] ?? 1.0
classifier.text = "\(className)\nWith Confidence:\n\(confidence)"
print("the classification result is: \(className)\nthe confidence is: \(confidence)")
imageClassifier is the model instance I created with this line of code, before the code segment above:
let imageClassifier = try? myImageClassifier(configuration: MLModelConfiguration())
myImageClassifier is the name of the ML model I created using Core ML.
The image itself displays correctly, but the prediction is different from Preview even when I input the same image. The image had to be converted from UIImage to CVPixelBuffer, since prediction(image:) only accepts an input of type CVPixelBuffer. pixelImage in the code segment above is the image after that conversion.
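For reference, the buffer(from:) helper is essentially the standard UIImage-to-CVPixelBuffer conversion. Below is a minimal sketch of it; the 299 x 299 target size in this sketch is an assumption and is meant to match whatever input size the model expects:

import UIKit
import CoreVideo

// Minimal sketch of the UIImage -> CVPixelBuffer conversion.
// The 299x299 size is an assumption and should match the model's expected input size.
func buffer(from image: UIImage) -> CVPixelBuffer? {
    let width = 299
    let height = 299
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary

    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else { return nil }

    // Flip the coordinate system and draw the UIImage, resizing it to the target dimensions.
    context.translateBy(x: 0, y: CGFloat(height))
    context.scaleBy(x: 1, y: -1)
    UIGraphicsPushContext(context)
    image.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
    UIGraphicsPopContext()

    return buffer
}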
I don't know which part is causing this. I downloaded the sample project from this tutorial, and when I used its Core ML model, MobileNet, instead, the code executed without errors and the results were correct. Is there something wrong with my code, or with the Core ML model I created? Any form of help would be appreciated.