The preview function of a Core ML model

I want to know how the preview function is implemented. I have an mlmodel for object detection, and when I open it in Xcode, Xcode provides a Preview tab: I can drop a photo into it and get the predicted bounding boxes drawn on the image. I would like to know how this visualization is implemented. At the moment, in a playground I can only get the three data items Label, Confidence, and BoundingBox, and drawing the prediction boxes still requires me to write my own processing code.

import Vision
import CoreML
import UIKit

func performObjectDetection() {
    do {
        // Wrap the auto-generated Core ML class ("court") in a Vision model.
        let model = try VNCoreMLModel(for: court(configuration: MLModelConfiguration()).model)

        let request = VNCoreMLRequest(model: model) { request, error in
            if let error = error {
                print("Failed to perform request: \(error)")
                return
            }

            guard let results = request.results as? [VNRecognizedObjectObservation] else {
                print("No results found")
                return
            }

            // Each observation carries a ranked label list and a normalized bounding box.
            for result in results {
                print("Label: \(result.labels.first?.identifier ?? "No label")")
                print("Confidence: \(result.labels.first?.confidence ?? 0.0)")
                print("BoundingBox: \(result.boundingBox)")
            }
        }

        guard let image = UIImage(named: "nbaPics.jpeg"), let ciImage = CIImage(image: image) else {
            print("Failed to load image")
            return
        }

        // Run the request on the test image.
        let handler = VNImageRequestHandler(ciImage: ciImage, orientation: .up, options: [:])
        try handler.perform([request])

    } catch {
        print("Failed to perform object detection: \(error)")
    }
}

performObjectDetection()

This is my code and its output.

Replies

Hello,

The implementation of features in Xcode is considered private. If you find that particular functionality useful, then I suggest that you file an enhancement request for the API you would like to see using Feedback Assistant.
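
In the meantime, you can reproduce a similar overlay yourself from the data you already have. The sketch below is one possible approach, not what Xcode actually does internally: it assumes UIKit is available and that you pass in the UIImage you ran the request on together with the VNRecognizedObjectObservation array from your completion handler. It converts each normalized boundingBox to image coordinates with VNImageRectForNormalizedRect, flips the y-axis (Vision uses a bottom-left origin, UIKit a top-left one), and strokes the boxes and top labels onto a copy of the image.

import UIKit
import Vision

// Sketch: draw Vision's detections onto a copy of the source image.
func drawBoundingBoxes(on image: UIImage,
                       observations: [VNRecognizedObjectObservation]) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { context in
        // Start from the original photo.
        image.draw(at: .zero)

        for observation in observations {
            // boundingBox is normalized (0...1) with a bottom-left origin;
            // map it into this image's coordinate space.
            var rect = VNImageRectForNormalizedRect(observation.boundingBox,
                                                    Int(image.size.width),
                                                    Int(image.size.height))
            // Flip the y-axis for UIKit's top-left origin.
            rect.origin.y = image.size.height - rect.origin.y - rect.size.height

            // Stroke the prediction box.
            context.cgContext.setStrokeColor(UIColor.red.cgColor)
            context.cgContext.setLineWidth(3)
            context.cgContext.stroke(rect)

            // Annotate with the top label and its confidence.
            if let top = observation.labels.first {
                let text = "\(top.identifier) \(String(format: "%.2f", top.confidence))"
                (text as NSString).draw(at: CGPoint(x: rect.minX, y: max(rect.minY - 18, 0)),
                                        withAttributes: [
                                            .font: UIFont.boldSystemFont(ofSize: 14),
                                            .foregroundColor: UIColor.red
                                        ])
            }
        }
    }
}

In your completion handler you could then call, for example, let annotated = drawBoundingBoxes(on: image, observations: results) and display the returned image in an image view or the playground live view.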