There are two main ways to get text from images on Apple platforms: Vision's VNRecognizeTextRequest and VisionKit's ImageAnalyzer.
Based on some OCR tests, I'm seeing that the two methods produce different output. Initially, I assumed ImageAnalyzer was using VNRequestTextRecognitionLevel.fast since it powers Live Text, but its output is sometimes better than what I get from a request run at VNRequestTextRecognitionLevel.accurate.
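For context, this is roughly how I'm comparing the two. A simplified sketch, assuming iOS 16+ and UIKit; the helper function names are mine:

```swift
import UIKit
import Vision
import VisionKit

// Path 1: Vision — explicit VNRecognizeTextRequest at the .accurate level.
func visionText(from cgImage: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate   // the level I'm comparing against
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    // Take the top candidate string from each recognized text observation.
    return request.results?.compactMap { $0.topCandidates(1).first?.string } ?? []
}

// Path 2: VisionKit — ImageAnalyzer configured for text analysis.
func imageAnalyzerText(from image: UIImage) async throws -> String {
    let analyzer = ImageAnalyzer()
    let configuration = ImageAnalyzer.Configuration(.text)
    let analysis = try await analyzer.analyze(image, configuration: configuration)
    return analysis.transcript   // flattened transcript of all recognized text
}
```

I'm comparing the joined strings from the first function against the transcript from the second on the same source images.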
Is ImageAnalyzer running a VNRecognizeTextRequest behind the scenes? And if it isn't, what pipeline does it use to recognize text?