I switched to the legacy VNClassifyImageRequest API to see whether it would at least produce an error; instead, any image I send now returns a completely wrong result, and this only happens in the Simulator.
If I need to use Vision for classifying objects in Swift Playgrounds, how would you even do this in a Simulator? Create ML doesn't generate its model classes properly in Swift Playgrounds (I've tried this before), ClassifyImageRequest throws an error, and VNClassifyImageRequest just gives completely incorrect results.
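For reference, the call path I'm describing is roughly this (a minimal sketch of the legacy API, not my exact code):

```swift
import Vision

// Run the legacy classification request on a CGImage and return
// whatever observations Vision produces (empty array if none).
func classify(cgImage: CGImage) throws -> [VNClassificationObservation] {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results ?? []
}
```

On a physical device this returns sensible labels; in the Simulator the same code path is what gives the incorrect results described above.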
Should I detect whether the app is running in a Simulator and fall back to a preset image and pre-recorded observation data? That would greatly limit my app's functionality.
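If I did go that route, I imagine it would look something like the sketch below, using Swift's `targetEnvironment(simulator)` conditional compilation. The canned-data part is a placeholder assumption, since `VNClassificationObservation` instances can't easily be constructed by hand; real fallback data would presumably have to be stored as plain labels/confidences rather than Vision observation objects:

```swift
import Vision

// Plain-value stand-in for a classification result, since Vision's
// observation types can't be meaningfully constructed in user code.
struct ClassificationResult {
    let identifier: String
    let confidence: Float
}

func classificationResults(for cgImage: CGImage) throws -> [ClassificationResult] {
#if targetEnvironment(simulator)
    // Placeholder: pre-recorded results captured on a real device.
    return [ClassificationResult(identifier: "placeholder", confidence: 1.0)]
#else
    // Real path: run the legacy Vision request on device.
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return (request.results ?? []).map {
        ClassificationResult(identifier: $0.identifier, confidence: $0.confidence)
    }
#endif
}
```

But this still feels like a workaround rather than a fix, which is why I'm asking whether there's a supported way to get correct Vision results in the Simulator.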