Could not create inference context

While trying to learn about Core ML, I ran into an issue. Using a model from Apple's website (MobileNetV2), I had no problems. When I tried to use a model I trained myself, the request failed in the iPhone simulator with the localized description "Could not create inference context". After a quick search I tested the same code in the arm64 simulator and it worked just fine. I believe this is an M1-related bug, because another forum thread reported that it worked without issues on an Intel Mac, but not on an M1.


    // Requires `import CoreML` and `import Vision` at the top of the file.
    if let data = self.animal.imageData {
        do {
            // Load the custom Core ML model. Use `try`, not `try!`, so a
            // failed load is handled by the enclosing catch block instead
            // of crashing.
            let modelFile = try DogorCatmodel1(configuration: MLModelConfiguration())
            let model = try VNCoreMLModel(for: modelFile.model)

            let handler = VNImageRequestHandler(data: data)

            let request = VNCoreMLRequest(model: model) { request, error in
                guard let results = request.results as? [VNClassificationObservation] else {
                    print("Could not classify")
                    return
                }

                for classification in results {
                    // Capitalize the first letter of the label for display.
                    var identifier = classification.identifier
                    identifier = identifier.prefix(1).capitalized + identifier.dropFirst()
                    print(identifier)
                    print(classification.confidence)
                }
            }

            do {
                try handler.perform([request])
            } catch {
                print(error.localizedDescription)
                print("Invalid")
            }
        } catch {
            print(error.localizedDescription)
        }
    }

Replies

Hello! The default model configuration (MLModelConfiguration()) tries to run the model on the CPU, GPU, and Neural Engine. The Neural Engine is not supported in the Simulator, which could explain the issue. Could you please file a bug report with the model and some sample code at http://feedbackassistant.apple.com/
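For example, you can restrict which compute units Core ML may use when loading the model. A minimal sketch, using the poster's DogorCatmodel1 class:

	// Sketch: keep Core ML off the Neural Engine in the Simulator by
	// restricting compute units in the model configuration.
	let config = MLModelConfiguration()
	#if targetEnvironment(simulator)
	config.computeUnits = .cpuAndGPU   // or .cpuOnly
	#endif
	let modelFile = try DogorCatmodel1(configuration: config)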

  • Thank you for the response! That is good insight, but I was also using the default model configuration for the MobileNetV2 model and had no issues in the simulator. The issue was only with the model I trained. My model would work in the arm64 simulator or when installed on my physical iPhone, just not in the standard iOS simulator.


I'm seeing this error for Apple's own Vision/Image models in the tvOS Simulator using Xcode 13.2.1 and tvOS 15.2 on an M1 Max.

Here is the code snippet that causes the error:

	let requestHandler = VNImageRequestHandler(cgImage: cgImage)
	let objectness = VNGenerateObjectnessBasedSaliencyImageRequest()
	let attention = VNGenerateAttentionBasedSaliencyImageRequest()
	let faces = VNDetectFaceRectanglesRequest()

	do {
		try requestHandler.perform([objectness, attention, faces])
	} catch {
		print("Error analyzing image: \(error)")
		return nil
	}

As a workaround based on Apple's response above, this works:

	let requestHandler = VNImageRequestHandler(cgImage: cgImage)
	let objectness = VNGenerateObjectnessBasedSaliencyImageRequest()
	let attention = VNGenerateAttentionBasedSaliencyImageRequest()
	let faces = VNDetectFaceRectanglesRequest()

	#if targetEnvironment(simulator)
	// Keep the requests off the (unsupported) Neural Engine in the Simulator.
	objectness.usesCPUOnly = true
	attention.usesCPUOnly = true
	faces.usesCPUOnly = true
	#endif

	do {
		try requestHandler.perform([objectness, attention, faces])
	} catch {
		print("Error analyzing image: \(error)")
		return nil
	}

Setting request.usesCPUOnly = true solved the issue for me on an M1 Max with Xcode 13.2.1.
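The same guard should also work for the original poster's VNCoreMLRequest, since usesCPUOnly is inherited from VNRequest. An untested sketch:

	// Untested sketch: apply the Simulator-only guard to the original
	// VNCoreMLRequest before calling handler.perform(_:).
	#if targetEnvironment(simulator)
	request.usesCPUOnly = true
	#endif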