How to create MLMultiArray from UIImage?

I am using an mlmodel created from a Keras model that takes a (64, 64, 3) NumPy array as input.

Xcode asks for an MLMultiArray with dimensions (3, 64, 64), but I haven't been able to find any examples of creating an MLMultiArray. I would greatly appreciate it if you could describe how this can be done or give an example.

I'm using CoreMLHelpers on GitHub (can't link due to moderation). It translates a UIImage to a CVPixelBuffer for use with my Keras convolutional network.


let pixelBuffer = myImage.pixelBuffer(width: 60, height: 90)
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer!, options: [:])
do {
    try handler.perform([self.classificationRequest])
} catch {
    print(error)
}

I can suggest another approach: the code below translates a UIImage (HWC shape (64, 64, 3)) to an MLMultiArray (CHW shape (3, 64, 64)) and scales the values to the 0.0 - 1.0 range:


import UIKit
import CoreML

// Converts a UIImage into an MLMultiArray with shape (3, height, width),
// with values scaled to 0.0 - 1.0. `resize(to:)` and `pixelData()` are
// UIImage extension helpers, not part of UIKit.
public func preprocess(image: UIImage, width: Int, height: Int) -> MLMultiArray? {
    let size = CGSize(width: width, height: height)

    // Resize, read out the RGBA bytes, and normalize each byte to 0.0 - 1.0.
    guard let pixels = image.resize(to: size).pixelData()?.map({ Double($0) / 255.0 }) else {
        return nil
    }

    guard let array = try? MLMultiArray(shape: [3, height, width] as [NSNumber], dataType: .double) else {
        return nil
    }

    // Split the interleaved RGBA data into separate R, G, B planes (alpha is dropped).
    let r = pixels.enumerated().filter { $0.offset % 4 == 0 }.map { $0.element }
    let g = pixels.enumerated().filter { $0.offset % 4 == 1 }.map { $0.element }
    let b = pixels.enumerated().filter { $0.offset % 4 == 2 }.map { $0.element }

    // Fill the array in CHW order: all red values, then green, then blue.
    let combination = r + g + b
    for (index, element) in combination.enumerated() {
        array[index] = NSNumber(value: element)
    }

    return array
}
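
For completeness, a hedged usage sketch: assuming a generated model class named `MyKerasModel` (hypothetical) whose multi-array input is called `image`, you could feed the result in like this:

// Hypothetical model class and input name; match them to your generated .mlmodel interface.
if let inputArray = preprocess(image: myImage, width: 64, height: 64),
   let output = try? MyKerasModel().prediction(image: inputArray) {
    print(output)
}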

Hey, recently I tried to use your code, but an error occurs: there's no UIImage.pixelData().

How can I fix this?
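
`resize(to:)` and `pixelData()` aren't part of UIKit; the preprocess function assumes UIImage extension helpers along these lines (a minimal sketch, not necessarily identical to the original helpers):

import UIKit

extension UIImage {
    // Redraws the image at the given size (scale 1, so points == pixels).
    func resize(to size: CGSize) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(size, false, 1.0)
        defer { UIGraphicsEndImageContext() }
        draw(in: CGRect(origin: .zero, size: size))
        return UIGraphicsGetImageFromCurrentImageContext() ?? self
    }

    // Returns the raw RGBA bytes (4 bytes per pixel), or nil on failure.
    func pixelData() -> [UInt8]? {
        guard let cgImage = cgImage else { return nil }
        let width = cgImage.width
        let height = cgImage.height
        var data = [UInt8](repeating: 0, count: width * height * 4)
        let success = data.withUnsafeMutableBytes { buffer -> Bool in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: width,
                                          height: height,
                                          bitsPerComponent: 8,
                                          bytesPerRow: 4 * width,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else {
                return false
            }
            // Draw the image into the RGBA buffer.
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
            return true
        }
        return success ? data : nil
    }
}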

The correct thing to do is convert your model with the `image_input_names` option so that you can pass in an image instead of an MLMultiArray. (You can also change the input type from multi-array to image in the mlmodel afterwards.)
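
Once the model's input is an image type, the generated class takes a CVPixelBuffer directly; a minimal sketch, again with `MyKerasModel` and the input name `image` as placeholders, and reusing the CoreMLHelpers `pixelBuffer(width:height:)` extension from above:

// Hypothetical generated class and input name; pixelBuffer(width:height:)
// comes from CoreMLHelpers.
if let buffer = myImage.pixelBuffer(width: 64, height: 64),
   let output = try? MyKerasModel().prediction(image: buffer) {
    print(output)
}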


Edit: Because this question comes up a lot, I wrote a blog post about it:

https://machinethink.net/blog/coreml-image-mlmultiarray/