Issues with using ClassifyImageRequest() on an Xcode simulator

Hello,

I am developing an app for the Swift Student Challenge; however, I keep encountering an error when using ClassifyImageRequest from the Vision framework in Xcode:

VTEST: error: perform(_:): inside 'for await result in resultStream' error: internalError("Error Domain=NSOSStatusErrorDomain Code=-1 \"Failed to create espresso context.\" UserInfo={NSLocalizedDescription=Failed to create espresso context.}")

It works perfectly when testing it on a physical device, and I saw on another thread that ClassifyImageRequest doesn't work on simulators. Will this cause problems with my submission to the challenge?

Thanks

Answered by DTS Engineer in 824406022

You'll need to check how projects for the challenge are being evaluated. The last I heard, evaluation was done in Simulator, but you'll want to verify that.

If this is working as expected on an actual device, then I suspect the error

"Failed to create espresso context." UserInfo={NSLocalizedDescription=Failed to create espresso context.}

refers to missing hardware support, i.e.:

"When running apps in Simulator, some hardware-specific features might not be available. Frameworks that provide access to device-specific features also provide API to tell you when those features are available. Call those APIs and handle the case where a feature isn’t available. To test the feature itself, run your code on a real device."

Note: "provide API to tell you when those features are available" i.e. be sure to query for image classification support on Simulator.

See Running your app in Simulator or on a device.
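Following that guidance, here is a minimal sketch of probing for classification support before issuing a request. The exact availability check is an assumption on my part; `VNClassifyImageRequest`'s `supportedIdentifiers()` (iOS 15+/macOS 12+) is one way to probe, and the expectation that it throws when the ML context can't be created in Simulator is not guaranteed:

```swift
#if canImport(Vision)
import Vision
#endif

// Sketch: query Vision for supported classification identifiers before
// relying on ClassifyImageRequest. Returns false when Vision is absent
// or the probe fails (e.g. in Simulator without ML hardware support).
func imageClassificationIsAvailable() -> Bool {
    #if canImport(Vision)
    do {
        // Assumption: if the underlying ML (espresso) context can't be
        // created, this call throws rather than succeeding silently.
        let identifiers = try VNClassifyImageRequest().supportedIdentifiers()
        return !identifiers.isEmpty
    } catch {
        return false
    }
    #else
    // Vision is unavailable on this platform.
    return false
    #endif
}
```

You could then gate your classification UI on this check and show a fallback when it returns false.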

+1

Wondering if the Swift Student Challenge judges will be able to use it. Do they run the projects on a physical device or in the simulator?


Wanted to confirm:

Requirements: Your submission must be an app playground (.swiftpm) in a ZIP file.

See Eligibility for details.

I switched to VNClassifyImageRequest, the legacy API, to see if there would be an error; instead, any image sent in a simulator now returns a completely wrong result, which only happens in the simulator.

If I need to use Vision to classify objects in Swift Playgrounds, how would you even do this in a simulator? Create ML doesn't create classes properly in Swift Playgrounds; I've tried this before. ClassifyImageRequest throws an error, and VNClassifyImageRequest just gives completely incorrect results.

Should I detect whether it's being run in a simulator and fall back to a preset image and observation data? This would greatly limit my app's functionality.
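For what it's worth, a minimal sketch of that fallback approach, using the `targetEnvironment(simulator)` compile-time condition. The result type and the canned labels ("dog", "labrador") are purely illustrative, not real Vision output:

```swift
// Illustrative stand-in for a Vision classification observation.
struct ClassificationResult {
    let identifier: String
    let confidence: Double
}

// Compile-time check: true only when built for a simulator destination.
func isRunningInSimulator() -> Bool {
    #if targetEnvironment(simulator)
    return true
    #else
    return false
    #endif
}

// Preset observation data to keep the app demonstrable in Simulator,
// where live classification is unavailable. Labels are made up.
func fallbackObservations() -> [ClassificationResult] {
    [ClassificationResult(identifier: "dog", confidence: 0.92),
     ClassificationResult(identifier: "labrador", confidence: 0.81)]
}
```

The app would then call the real classification path on device and `fallbackObservations()` when `isRunningInSimulator()` is true.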

@cheesenuggit What is your plan? I'm not sure how to approach this.
