ML & Vision

Put machine learning to work in your app.

Sessions

  • Capture machine-readable codes and text with VisionKit

    8:00 a.m.

    Meet the Data Scanner in VisionKit: This framework combines AVCapture and Vision to enable live capture of machine-readable codes and text through a simple Swift API. We’ll show you how to control the types of content your app can capture by specifying barcode symbologies and language selection. We’ll also explore how you can enable guidance in your app, customize item highlighting or regions of interest, and handle interactions after your app detects an item. For more on interacting with Live Text through still images or paused video frames, watch "Add Live Text interaction to your app" from WWDC22.
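
    To give a concrete feel for the API this session covers, here is a minimal Swift sketch of the Data Scanner: the scanner setup uses the DataScannerViewController API introduced at WWDC22, while the surrounding view-controller wiring is illustrative.

    ```swift
    import UIKit
    import VisionKit

    final class ScannerPresenter: NSObject, DataScannerViewControllerDelegate {
        // Present a scanner limited to QR codes and US English text.
        func presentScanner(from host: UIViewController) throws {
            // Check device support and availability before presenting.
            guard DataScannerViewController.isSupported,
                  DataScannerViewController.isAvailable else { return }

            let scanner = DataScannerViewController(
                recognizedDataTypes: [
                    .barcode(symbologies: [.qr]),   // restrict barcode symbologies
                    .text(languages: ["en-US"])     // restrict language selection
                ],
                qualityLevel: .balanced,
                isGuidanceEnabled: true,            // on-screen user guidance
                isHighlightingEnabled: true         // default item highlighting
            )
            scanner.delegate = self
            host.present(scanner, animated: true)
            try scanner.startScanning()
        }

        // Handle the interaction after the scanner detects an item and the user taps it.
        func dataScanner(_ dataScanner: DataScannerViewController,
                         didTapOn item: RecognizedItem) {
            switch item {
            case .text(let text):       print("Text: \(text.transcript)")
            case .barcode(let barcode): print("Barcode: \(barcode.payloadStringValue ?? "")")
            @unknown default:           break
            }
        }
    }
    ```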

  • Get to know Create ML Components

    8:00 a.m.

    Create ML makes it easy to build custom machine learning models for image classification, object detection, sound classification, hand pose classification, action classification, tabular data regression, and more. And with the Create ML Components framework, you can further customize underlying tasks and improve your model. We’ll explore the feature extractors, transformers, and estimators that make up these tasks, and show you how you can combine them with other components and pre-processing steps to build custom tasks for concepts like image regression. For more information on creating complex customizable tasks, we recommend watching "Compose advanced models with Create ML Components" from WWDC22.
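
    As a rough sketch of the composition style this session introduces, the snippet below chains a feature extractor (a transformer) to an estimator to form an image-regression task. The `loadAnnotatedImages()` helper is hypothetical, and exact component names may vary by use case; run it in an async context.

    ```swift
    import CoreImage
    import CreateMLComponents

    // Hypothetical helper: images annotated with a Float score (e.g. ripeness).
    func loadAnnotatedImages() -> [AnnotatedFeature<CIImage, Float>] { [] }

    // Compose a custom image-regression task from a feature extractor
    // and an estimator.
    let task = ImageFeaturePrint().appending(LinearRegressor())

    // Fitting the estimator produces a transformer usable for prediction.
    let model = try await task.fitted(to: loadAnnotatedImages())
    ```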

  • What's new in Create ML

    8:00 a.m.

    Discover the latest updates to Create ML. We’ll share improvements to Create ML’s evaluation tools that can help you understand how your custom models will perform on real-world data. Learn how you can check model performance on each type of image in your test data and identify problems within individual images to help you troubleshoot mistaken classifications, poorly labeled data, and other errors. We’ll also show you how to test your model with iPhone and iPad in live preview using Continuity Camera, and share how you can take Action Classification even further with the new Repetition Counting capabilities of the Create ML Components framework. To learn more about all that Create ML can bring to your app, watch "Classify hand poses and actions with Create ML" and "Build dynamic iOS apps with the Create ML framework" from WWDC21.
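
    The evaluation workflow is also available programmatically through the CreateML framework on macOS. A minimal sketch, assuming labeled image folders on disk (the paths and model are placeholders):

    ```swift
    import CreateML
    import Foundation

    // Placeholder paths to labeled image folders (one subfolder per class).
    let trainDir = URL(fileURLWithPath: "/path/to/train")
    let testDir  = URL(fileURLWithPath: "/path/to/test")

    // Train an image classifier, then evaluate it on held-out test data to
    // surface misclassifications and poorly labeled images.
    let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainDir))
    let metrics = classifier.evaluation(on: .labeledDirectories(at: testDir))
    print("Test classification error: \(metrics.classificationError)")
    ```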

Labs

  • Machine Learning & Computer Vision lab

    Tuesday @ 9:00 a.m. - 12:00 p.m.

    Request an appointment with an Apple engineer for guidance and conversation about Core ML, Create ML, Vision, VisionKit, Natural Language, machine learning model conversion, model optimization, and more.

Digital Lounges

  • Q&A: Create ML

    Tuesday @ 1:00 - 2:00 p.m.

    Ask Apple engineers about Create ML during this one-hour text-based Q&A. Stop in to request guidance on a code-level question, ask for clarifications, or learn from others in a group setting.

  • Meet the Presenter: Get to know Create ML Components

    Tuesday @ 3:00 - 4:00 p.m.

    Meet the presenters of “Get to know Create ML Components” and join a text-based watch party for the session, followed by a short Q&A. The watch party begins 5 minutes after the start of this activity — so don’t be late!

Sessions

  • Compose advanced models with Create ML Components

    8:00 a.m.

    Take your custom machine learning models to the next level with Create ML Components. We'll show you how to work with temporal data like video or audio and compose models that can count repetitive human actions or provide advanced sound classification. We'll also share best practices on using incremental fitting to speed up model training with new data. For an introduction to custom machine learning models, watch "Get to know Create ML Components" from WWDC22.

  • Optimize your Core ML usage

    8:00 a.m.

    Learn how Core ML works with the CPU, GPU, and Neural Engine to power on-device, privacy-preserving machine learning experiences for your apps. We’ll explore the latest tools for understanding and maximizing the performance of your models. We’ll also show you how to generate reports to easily understand your model performance characteristics, help you gain insight into your models with the Core ML Instrument, and take you through API enhancements to further optimize Core ML integration in your apps. To get the most out of this session, be sure to watch “Tune your Core ML models” from WWDC21.
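
    One of the API enhancements mentioned above fits in a few lines. A minimal sketch, assuming a compiled model named Classifier.mlmodelc bundled with the app (the model name is a placeholder):

    ```swift
    import CoreML

    // Restrict execution to the CPU and Neural Engine (added in iOS 16 /
    // macOS 13); other options include .all, .cpuOnly, and .cpuAndGPU.
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine

    // Load the bundled, compiled model with that configuration.
    guard let url = Bundle.main.url(forResource: "Classifier",
                                    withExtension: "mlmodelc") else {
        fatalError("Model not found in bundle")
    }
    let model = try MLModel(contentsOf: url, configuration: config)

    // Predictions then run only on the allowed compute units:
    // let output = try model.prediction(from: input)  // input: MLFeatureProvider
    ```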

Labs

  • Machine Learning & Computer Vision lab

    Wednesday @ 1:00 - 4:00 p.m.

    Request an appointment with an Apple engineer for guidance and conversation about Core ML, Create ML, Vision, VisionKit, Natural Language, machine learning model conversion, model optimization, and more.

Digital Lounges

  • Q&A: Core ML

    Wednesday @ 9:00 - 10:00 a.m.

    Ask Apple engineers about Core ML during this one-hour text-based Q&A. Stop in to request guidance on a code-level question, ask for clarifications, or learn from others in a group setting.

  • Meet the Presenter: Compose advanced models with Create ML Components

    Wednesday @ 11:00 a.m. - 12:00 p.m.

    Meet the presenters of “Compose advanced models with Create ML Components” and join a text-based watch party for the session, followed by a short Q&A. The watch party begins 5 minutes after the start of this activity — so don’t be late!

Sessions

  • What's new in Vision

    8:00 a.m.

    Learn about the latest updates to Vision APIs that help your apps recognize text, detect faces and face landmarks, and implement optical flow. We’ll take you through the capabilities of optical flow for video-based apps, show you how to update your apps with revisions to the machine learning models that drive these APIs, and explore how you can visualize your Vision tasks with Quick Look Preview support in Xcode. To get the most out of this session, we recommend watching “Detect people, faces, and poses using Vision” from WWDC21.
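
    A minimal sketch of the text-recognition path, pinned to the 2022 request revision; the input image is assumed to come from elsewhere in your app:

    ```swift
    import Vision

    func recognizeText(in image: CGImage) throws {
        let request = VNRecognizeTextRequest { request, _ in
            // Each observation carries ranked candidate transcriptions.
            let observations = request.results as? [VNRecognizedTextObservation] ?? []
            for observation in observations {
                if let best = observation.topCandidates(1).first {
                    print(best.string)
                }
            }
        }
        request.revision = VNRecognizeTextRequestRevision3   // latest as of WWDC22
        request.recognitionLevel = .accurate

        try VNImageRequestHandler(cgImage: image).perform([request])
    }
    ```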

Labs

  • Machine Learning & Computer Vision lab

    Thursday @ 9:00 a.m. - 12:00 p.m.

    Request an appointment with an Apple engineer for guidance and conversation about Core ML, Create ML, Vision, VisionKit, Natural Language, machine learning model conversion, model optimization, and more.

  • Core Motion lab

    Thursday @ 10:00 a.m. - 12:00 p.m.

    Request an appointment with an Apple engineer for guidance and conversation about your app and the Core Motion APIs, best practices, and more.

Digital Lounges

  • Q&A: Vision

    Thursday @ 2:00 - 3:00 p.m.

    Ask Apple engineers about Vision during this one-hour text-based Q&A. Stop in to request guidance on a code-level question, ask for clarifications, or learn from others in a group setting.

Sessions

  • Accelerate machine learning with Metal

    8:00 a.m.

    Discover how you can use Metal to accelerate your PyTorch model training on macOS. We'll take you through updates to TensorFlow training support, explore the latest features and operations of MPS Graph, and share best practices to help you achieve great performance for all your machine learning needs. For more on using Metal with machine learning, watch "Accelerate machine learning with Metal Performance Shaders Graph" from WWDC21.
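
    On the MPS Graph side, here is a minimal Swift sketch that builds and runs a tiny graph (elementwise addition) on the default Metal device; real training graphs compose many such operations.

    ```swift
    import Metal
    import MetalPerformanceShadersGraph

    let device = MTLCreateSystemDefaultDevice()!
    let graph = MPSGraph()

    // Two input placeholders and one addition node.
    let a = graph.placeholder(shape: [4], dataType: .float32, name: "a")
    let b = graph.placeholder(shape: [4], dataType: .float32, name: "b")
    let c = graph.addition(a, b, name: "c")

    // Wrap Swift arrays as tensor data for the graph.
    func tensorData(_ values: [Float]) -> MPSGraphTensorData {
        let data = values.withUnsafeBufferPointer { Data(buffer: $0) }
        return MPSGraphTensorData(device: MPSGraphDevice(mtlDevice: device),
                                  data: data, shape: [4], dataType: .float32)
    }

    // Execute the graph and read back c = a + b.
    let results = graph.run(feeds: [a: tensorData([1, 2, 3, 4]),
                                    b: tensorData([10, 20, 30, 40])],
                            targetTensors: [c],
                            targetOperations: nil)
    print(results[c]!)   // tensor data holding [11, 22, 33, 44]
    ```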

  • Explore the machine learning development experience

    8:00 a.m.

    Learn how to bring great machine learning (ML) based experiences to your app. We'll take you through model discovery, conversion, and training and provide tips and best practices for ML. We'll share considerations to take into account as you begin your ML journey, demonstrate techniques for evaluating model performance, and explore how you can tune models to achieve real-time performance on device. To learn more about the techniques covered in this session, watch "Optimize your Core ML usage" and "Accelerate machine learning with Metal" from WWDC22.

Labs

  • Machine Learning & Computer Vision lab

    Friday @ 1:00 - 4:00 p.m.

    Request an appointment with an Apple engineer for guidance and conversation about Core ML, Create ML, Vision, VisionKit, Natural Language, machine learning model conversion, model optimization, and more.

Digital Lounges

  • Meet the Presenter: Accelerate machine learning with Metal

    Friday @ 9:00 - 10:00 a.m.

    Meet the presenters of “Accelerate machine learning with Metal” and join a text-based watch party for the session, followed by a short Q&A. The watch party begins 5 minutes after the start of this activity — so don’t be late!

  • Meet the Presenter: Create custom catalogs at scale with ShazamKit

    Friday @ 9:00 - 10:00 a.m.

    Meet the presenters of “Create custom catalogs at scale with ShazamKit” and join a text-based watch party for the session, followed by a short Q&A. The watch party begins 5 minutes after the start of this activity — so don’t be late!

  • Meet the Presenter: Explore the machine learning development experience

    Friday @ 10:00 - 11:00 a.m.

    Meet the presenter of “Explore the machine learning development experience” and join a text-based watch party for the session, in which we explore finding, converting, and optimizing a custom ML model to bring intelligent features to apps. The watch party begins 5 minutes after the start of this activity — so don’t be late!