Swift Student Challenge Vision

Hi Developers,

I want to create a Vision app in Swift Playgrounds on iPad. However, Vision does not function properly in Swift Playgrounds on iPad or in Xcode Playgrounds; the Vision code only works in a normal Xcode project.

So, can I submit my Swift Student Challenge 2024 application as a normal Xcode project rather than as an Xcode Playground or Swift Playgrounds file?

Thanks :)

Replies

I believe you are only allowed to submit App Playground (.swiftpm) files, as described on the eligibility page. That was the case for last year's challenge as well. Make sure you submit an App Playground, not an Xcode Playground. Here is a thread that explains how to choose the right project type in Xcode or Swift Playgrounds.

I assume you're referring to the Vision framework, not visionOS. I would try to find out why your app doesn't work as expected in Playgrounds, as I believe it may be possible to use this framework there as well, perhaps with a few small changes. What is the issue with your Vision code?

Make sure to check the full terms and conditions on February 5th.

Hi thanks for replying:

Here is my code:

Content View File:

import SwiftUI
import Vision

struct ContentView: View {
    @State private var showImagePicker: Bool = false
    @State private var inputImage: UIImage?
    @State private var classificationLabel: String = "Upload Your Photo and add it to your Daily Food Logger"

var body: some View {
    VStack {
        Text(classificationLabel)
            .padding()

        if let inputImage = inputImage {
            Image(uiImage: inputImage)
                .resizable()
                .scaledToFit()
        }

        Button("Upload Photos") {
            self.showImagePicker = true
        }
    }
    .sheet(isPresented: $showImagePicker) {
        ImagePicker(image: self.$inputImage)
    }
}

}

#Preview { ContentView() }

ImagePicker File:

import SwiftUI
import UIKit

struct ImagePicker: UIViewControllerRepresentable {
    @Environment(\.presentationMode) var presentationMode
    @Binding var image: UIImage?

func makeUIViewController(context: Context) -> UIImagePickerController {
    let picker = UIImagePickerController()
    picker.delegate = context.coordinator
    return picker
}

func updateUIViewController(_ uiViewController: UIImagePickerController, context: Context) {}

func makeCoordinator() -> Coordinator {
    Coordinator(self)
}

class Coordinator: NSObject, UINavigationControllerDelegate, UIImagePickerControllerDelegate {
    var parent: ImagePicker

    init(_ parent: ImagePicker) {
        self.parent = parent
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        if let uiImage = info[.originalImage] as? UIImage {
            parent.image = uiImage
        }

        parent.presentationMode.wrappedValue.dismiss()
    }
}

}

It seems that your code doesn't use any Vision framework features, at least not yet. Removing the Vision import doesn't change anything in your current code. Perhaps you want to add some Vision-related features later?
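If the plan is to classify the picked image later, here is a minimal sketch of how a Vision + Core ML classification could be wired in. Note that `FoodClassifier` is a placeholder for whatever model class you export from Create ML, and the completion-handler shape is just one possible design:

```swift
import Vision
import CoreML
import UIKit

// Sketch: classify a UIImage with a Core ML model via Vision.
// "FoodClassifier" is a placeholder for your own Create ML model class.
func classify(_ uiImage: UIImage, completion: @escaping (String) -> Void) {
    guard let cgImage = uiImage.cgImage,
          let coreMLModel = try? FoodClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion("Could not load model")
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // The first result is the highest-confidence classification.
        let label = (request.results?.first as? VNClassificationObservation)?.identifier
        DispatchQueue.main.async {
            completion(label ?? "Unknown")
        }
    }

    // Perform the request off the main thread.
    let handler = VNImageRequestHandler(cgImage: cgImage)
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

You could call this from the `didFinishPickingMediaWithInfo` delegate method and assign the result to `classificationLabel`.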

I tried to run your code, and I only had to make some small changes in your ImagePicker since PresentationMode was deprecated:

struct ImagePicker: UIViewControllerRepresentable { 
    @Environment(\.dismiss) private var dismiss
    @Binding var image: UIImage?
    func makeUIViewController(context: Context) -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController, context: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    class Coordinator: NSObject, UINavigationControllerDelegate, UIImagePickerControllerDelegate {
        var parent: ImagePicker

        init(_ parent: ImagePicker) {
            self.parent = parent
        }

        func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            if let uiImage = info[.originalImage] as? UIImage {
                parent.image = uiImage
            }

            parent.dismiss()
        }
    }
}

Everything seemed to work as expected after that. I was able to pick an image and it was displayed in the ContentView.

Also, if you want a pure-SwiftUI photo picker implementation (i.e. one that doesn't rely on UIKit / UIViewControllerRepresentable), you can find out how to do it in this WWDC22 session video and sample code. But your ImagePicker seems to work fine too.
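For reference, a minimal sketch of that SwiftUI-only approach using `PhotosPicker` (iOS 16+) might look like this; the view name is just an example:

```swift
import SwiftUI
import PhotosUI

// Sketch: pure-SwiftUI photo picking with PhotosPicker (iOS 16+),
// replacing the UIImagePickerController wrapper entirely.
struct PhotoPickerView: View {
    @State private var selection: PhotosPickerItem?
    @State private var image: UIImage?

    var body: some View {
        VStack {
            PhotosPicker("Upload Photos", selection: $selection, matching: .images)

            if let image {
                Image(uiImage: image)
                    .resizable()
                    .scaledToFit()
            }
        }
        .onChange(of: selection) { item in
            Task {
                // Load the picked item's raw data and convert it to a UIImage.
                if let data = try? await item?.loadTransferable(type: Data.self) {
                    image = UIImage(data: data)
                }
            }
        }
    }
}
```

This also has the advantage of not requiring photo library permission, since the picker runs out of process.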

Hi!

Thanks for the reply

I want it to process the image through a Create ML model and show the name of the picture. For example, the user would upload a photo of a dish (e.g. Pad Thai) and then Core ML would tell them what the food is, in text.

The Core ML model works fine in a normal Xcode project, but in Swift Playgrounds it simply fails to identify the food.

Thanks :)

I believe the issue could be that the app is unable to find your ML model.

Check out this response to another thread and this one as well. You need to make some changes in how the App Playground handles resources.

You will also need your own Swift model class file, since in App Playgrounds these are not automatically generated. You can copy the one generated by your Xcode project.

I also previously had a problem using the Vision framework in Playgrounds (it doesn't work well in the preview or the simulator).

I think it will work if you test it on an actual device.

  • Hi, I am testing in Swift Playgrounds on a physical iPad :)

    Do you know how to fix it?

    Thanks



Could you describe the error you are encountering so that we can help? Does the app throw a specific error / show a warning in Console?