Help Needed: SwiftUI View with Camera Integration and Core ML Object Recognition

Hi everyone,

I'm working on a SwiftUI app and need help building a view that integrates the device's camera and uses a pre-trained Core ML model for real-time object recognition. Here's what I want to achieve:

  1. Open the device's camera from a SwiftUI view.

  2. Capture frames from the camera feed and analyze them with a Core ML model trained in Create ML.

  3. If a specific figure/object is recognized, automatically close the camera view and navigate to another screen in my app.

I'm looking for guidance on:

  • Setting up live camera capture in SwiftUI.
  • Using Core ML and Vision frameworks for real-time object recognition in this context.
  • Managing navigation between views when the recognition condition is met.

Any advice, code snippets, or examples would be greatly appreciated!

Thanks in advance!

Answered by DTS Engineer in 819869022

Hello @VladimirBarev,

See AVCam for guidance on setting up live camera capture in SwiftUI.
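
Since AVCam is a fairly large sample, here is a minimal sketch of the capture side, assuming iOS and the back wide-angle camera. The type names (`CameraController`, `CameraPreview`) are illustrative, not from the sample:

```swift
import AVFoundation
import SwiftUI
import UIKit

// Owns and configures the capture session off the main thread.
final class CameraController: ObservableObject {
    let session = AVCaptureSession()
    private let sessionQueue = DispatchQueue(label: "camera.session")
    private var isConfigured = false

    func start() {
        sessionQueue.async {
            self.configureIfNeeded()
            self.session.startRunning()
        }
    }

    func stop() {
        sessionQueue.async { self.session.stopRunning() }
    }

    private func configureIfNeeded() {
        guard !isConfigured else { return }
        session.beginConfiguration()
        defer { session.commitConfiguration(); isConfigured = true }

        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .back),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return }
        session.addInput(input)
    }
}

// Bridges AVCaptureVideoPreviewLayer into SwiftUI.
struct CameraPreview: UIViewRepresentable {
    let session: AVCaptureSession

    func makeUIView(context: Context) -> PreviewView {
        let view = PreviewView()
        view.previewLayer.session = session
        view.previewLayer.videoGravity = .resizeAspectFill
        return view
    }

    func updateUIView(_ uiView: PreviewView, context: Context) {}

    final class PreviewView: UIView {
        override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
        var previewLayer: AVCaptureVideoPreviewLayer { layer as! AVCaptureVideoPreviewLayer }
    }
}
```

Don't forget an NSCameraUsageDescription entry in your Info.plist, and check `AVCaptureDevice.authorizationStatus(for: .video)` (requesting access if needed) before starting the session.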

See Recognizing Objects in Live Capture, which uses Core ML and Vision for real-time object recognition.
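
As a rough sketch of that sample's approach, assuming a Create ML image classifier whose Xcode-generated class is called `MyObjectClassifier` (a hypothetical name, substitute your own), you attach an `AVCaptureVideoDataOutput` and run each frame through a `VNCoreMLRequest`:

```swift
import AVFoundation
import CoreML
import Vision

// Receives frames from an AVCaptureVideoDataOutput and classifies them with Vision.
final class FrameAnalyzer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let request: VNCoreMLRequest
    var onRecognition: ((String, VNConfidence) -> Void)?

    override init() {
        // Assumption: MyObjectClassifier is your generated Create ML model class.
        // Real code should handle these errors instead of force-unwrapping.
        let coreMLModel = try! MyObjectClassifier(configuration: MLModelConfiguration()).model
        let visionModel = try! VNCoreMLModel(for: coreMLModel)
        request = VNCoreMLRequest(model: visionModel)
        request.imageCropAndScaleOption = .centerCrop
        super.init()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // .right matches portrait orientation with the back camera.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                            orientation: .right, options: [:])
        try? handler.perform([request])

        // For a classifier; an object detector returns VNRecognizedObjectObservation instead.
        guard let top = (request.results as? [VNClassificationObservation])?.first,
              top.confidence > 0.9 else { return }
        onRecognition?(top.identifier, top.confidence)
    }
}

// Wiring, inside the session configuration from the previous sketch:
// let output = AVCaptureVideoDataOutput()
// output.setSampleBufferDelegate(analyzer, queue: DispatchQueue(label: "camera.frames"))
// if session.canAddOutput(output) { session.addOutput(output) }
```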

As for managing navigation between views when the recognition condition is met:

In SwiftUI, navigation is state based. For example, see Manage navigation state for NavigationStack. When you get a successful recognition, you can update your navigation state accordingly.
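
A minimal sketch of that pattern, using hypothetical `Route` and `CameraScreen` types (the latter standing in for a view that composes the capture and analysis pieces above):

```swift
import SwiftUI

// Hypothetical route type for the post-recognition screen.
enum Route: Hashable {
    case result(label: String)
}

struct ContentView: View {
    @State private var path: [Route] = []
    @State private var showCamera = false

    var body: some View {
        NavigationStack(path: $path) {
            Button("Scan") { showCamera = true }
                .navigationDestination(for: Route.self) { route in
                    switch route {
                    case .result(let label):
                        Text("Recognized: \(label)")
                    }
                }
                .fullScreenCover(isPresented: $showCamera) {
                    CameraScreen { label in
                        // Recognition succeeded: dismiss the camera,
                        // then push the result screen by mutating state.
                        showCamera = false
                        path.append(.result(label: label))
                    }
                }
        }
    }
}

// Placeholder: in a real app this view would host CameraPreview and
// call onRecognized when FrameAnalyzer reports a match.
struct CameraScreen: View {
    let onRecognized: (String) -> Void
    var body: some View {
        Color.black.ignoresSafeArea()
    }
}
```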

-- Greg
