Does VisionKit produce slightly different results when called from a macOS app and an iOS app for the same image?

I'm using the VisionKit framework for recognizing text and detecting rectangles in an image, via the VNRecognizeTextRequest and VNDetectRectanglesRequest requests. For the same image, I found slight differences between macOS and iOS in the boundingBox coordinates of the detected text and rectangles. Is this expected? Can anything be done to make the results identical? Also, on macOS, when I use the same requests from Python (via the pyobjc-framework-Vision package), I again get slightly different results.
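For reference, here is a minimal Swift sketch of the setup I described. One assumption worth testing: each request has a `revision` property, and different OS releases may default to different model revisions, so pinning the revision explicitly (when both platforms support it) might reduce the divergence. `cgImage` is whatever image you are comparing across platforms.

```swift
import CoreGraphics
import Vision

// Sketch of the text + rectangle detection described above.
// Assumption: pinning each request's `revision` instead of taking the
// platform default may reduce cross-OS differences, provided both OS
// builds support the pinned revision.
func runRequests(on cgImage: CGImage) throws {
    let textRequest = VNRecognizeTextRequest()
    textRequest.recognitionLevel = .accurate
    // Pin the model revision explicitly if this OS supports it.
    if VNRecognizeTextRequest.supportedRevisions.contains(VNRecognizeTextRequestRevision2) {
        textRequest.revision = VNRecognizeTextRequestRevision2
    }

    let rectRequest = VNDetectRectanglesRequest()
    if VNDetectRectanglesRequest.supportedRevisions.contains(VNDetectRectanglesRequestRevision1) {
        rectRequest.revision = VNDetectRectanglesRequestRevision1
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([textRequest, rectRequest])

    // boundingBox is in normalized coordinates with the origin at the
    // bottom-left, on both platforms.
    for observation in textRequest.results ?? [] {
        print("text box:", observation.boundingBox)
    }
    for observation in rectRequest.results ?? [] {
        print("rect box:", observation.boundingBox)
    }
}
```

Even with pinned revisions I still see small deltas, which is what prompted the question.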

Replies

Sorry, this isn't an answer, but just wanted to clarify that VisionKit is a different framework from Vision.