Explore the biggest updates from WWDC25

    Dive into the key features announced at WWDC25 in this all-new session recorded live at the Apple Developer Center in Cupertino. This video is for all members of your team, including designers, developers, and product managers.

    Chapters

    • 0:03:10 - Agenda
    • 0:04:03 - The new design system
    • 0:19:16 - Build with the new design
    • 0:39:53 - Machine learning and Apple Intelligence
    • 1:09:03 - What's new in visionOS


    Related Videos

    WWDC25

    • Build a SwiftUI app with the new design
    • Build a UIKit app with the new design
    • Build an AppKit app with the new design
    • Discover machine learning & AI frameworks on Apple platforms
    • Explore spatial accessory input on visionOS
    • Explore video experiences for visionOS
    • Get started with MLX for Apple silicon
    • Meet Liquid Glass
    • Meet SwiftUI spatial layout
    • Meet the Foundation Models framework
    • Say hello to the new look of app icons
    • What’s new in visionOS 26
    • What’s new in widgets

Hello, and welcome to the Apple Developer Center in Cupertino. My name’s Leah and I’m a Technology Evangelist at Apple. Today we're so excited to go over some of the biggest announcements from WWDC. And as you can imagine, there's a lot to cover.

    But first, I want to share more about where we are.

The Developer Center is a part of Apple Park, and it provides a great venue for today's event, with different areas for collaboration and connection.

This room is Big Sur. It's a state-of-the-art space designed to support a range of activities, including in-person presentations, studio recordings, and live broadcasts.

    There are also labs, briefing rooms, and conference rooms that allow us to host activities like this and many more.

This is one of 4 developer centers around the world where we host designers and developers for sessions, labs, and workshops. Now, has anyone previously been to a developer center? Nice, cool. Welcome back. And if this is your first time, we're so happy to have you.

    We also have many people joining us online right now, so hi, thanks for tuning in.

    Now I want to set the scene for today's presentations. This year’s WWDC was packed with exciting announcements, and we’re bringing developers together around the world, in 28 cities, to review some of the biggest updates.

    At WWDC, we released over 100 videos with the latest updates and best practices across platforms. As you can imagine, there's a lot to dig into.

So today, over the next 2 hours, we're going to review the most important updates so you can gain a broad, high-level understanding of what's new. With this, we hope you'll be inspired with your own ideas and decide what to explore in more depth through the videos and documentation.

    My team’s really looking forward to kicking this off with you. So, before we get started, I have a few announcements for the people in the room. If you haven't figured it out already, it’s easy to stay connected throughout the day with the Apple WiFi network. And if you need to recharge your devices at any point, there's power located under every seat in front of the armrests.

    And for everyone tuning in today, these presentations are meant to be a special experience just for you, so please refrain from taking video or live streaming during the presentations. However, you are totally welcome to take photos throughout the event, and we'll also send follow up information once the event concludes so you won't miss a thing.

With those small pieces out of the way, we have a full agenda recapping the biggest updates from WWDC, including the new design system, Apple Intelligence and machine learning, and visionOS.

    So finally, the moment you've been waiting for, let's check out the schedule. We have a lot of great presentations lined up.

    First, Majo and Curt will talk about the biggest updates from the new design system.

    Then Shashank will review the latest with machine learning and Apple Intelligence.

After that, we're gonna take a quick break where you can grab a coffee, chat with developers, or stretch. And then finally, Allan will share what's new with spatial computing for visionOS. After that, for those of you here with us in Cupertino, we're gonna have a mixer where you can enjoy some refreshments and chat with Apple engineers and designers. It'll be really fun.

    So with the schedule out of the way, please join me in welcoming Majo on stage to talk about the new design system.

Hello everyone, thanks for coming and tuning in. I’m Majo and I’m a designer on the Evangelism team. I'm excited to share with you an overview of the new design system and its incredible possibilities for your apps. Later, my colleague Curt will take you through some details of its implementation.

    At WWDC25, we announced a significant new step and evolution of the look and feel of Apple software.

    This is a new harmonized design language that's more cohesive, adaptive, and expressive.

From a completely reimagined look for app icons all the way to the controls, structure, and layout of apps.

    These updates deliver a unified design language for your apps across multiple platforms. You'll get these new appearances as soon as you recompile for the new SDK.

    Today, I’ll share an overview of Liquid Glass, updates to some key components in the system, and how to bring this new look and feel to your app icon.

    At the heart of these updates is a brand new adaptive material for controls and navigational elements called Liquid Glass.

    This new set of materials dynamically bends, shapes, and concentrates light in real time.

    By sculpting light like this, controls are almost transparent while still being distinct.

    Simultaneously, the material behaves and moves organically in a manner that feels more like a lightweight liquid responding fluidly to touch, making transitions between sections feel seamless as the controls continually shapeshift.

    When showing a menu, the button simply pops open with a transition that communicates a very clear and direct relationship between itself and its options.

    With a similar intention elements can even lift up into Liquid Glass as you interact with the control, allowing you to observe the value underneath it.

    There are 2 types of Liquid Glass, clear and regular.

    Clear Liquid Glass is permanently transparent, which allows the richness of the content underneath to come through and interact with the glass in beautiful ways.

    To provide enough legibility for symbols and labels, it needs a dimming layer to darken the underlying content.

    And then there's regular Liquid Glass. This is the one you’ll be using the most. It’s highly adaptable and provides legibility over different conditions.

    Regular Liquid Glass knows how bright or dark the background is, so the platter and the symbols on top of it flip between light and dark to stay visible. This behavior happens automatically when you use the regular style and works independently of light and dark mode.

To bring Liquid Glass into your app while staying true to your unique brand, rely on common system components.

    Like tab bars, to navigate between the top level sections of your app.

    Toolbars, to group actions and facilitate navigation at a screen level. And sheets, to present simple content or actions in modal or non-modal views.

I’ll start with tab bars. They provide an overview of your app at a glance, supporting navigation among the different sections, and since they're always visible, they're easy to access at any time.

    They have been redesigned for Liquid Glass to float above the app content, guiding the navigation.

    They are translucent and can be configured to minimize and re-expand on scroll, allowing your content to shine through while keeping the sense of place.

    If you’re updating an existing tab bar there are a couple of new features you can take advantage of.

    Like the new search tab, it's a dedicated tab to keep search easier to reach and always available from every part of your app.

    The tab bar can also display an accessory view like the mini player in the Music app.

    The system displays it above the tab bar, matching its appearance. When the tab bar is minimized on scroll, the accessory view moves down in line showing more of your content.

    The accessory view stays visible across your app, so avoid using it for screen-specific actions. Mixing elements from different parts of the app can blur hierarchy and make it harder to distinguish what's persistent from what's contextual.

    Instead keep them in the content area with the content it supports. Curt will explain how to bring all these updates to the tab bar later on.

    Next, toolbars. They offer convenient access to frequently used controls arranged along the top or bottom of the screen. They also facilitate access to navigation and help people feel oriented, all while Liquid Glass ensures that your content remains the focal point.

    To keep the actions in the toolbar understandable at a glance, opt for simple recognizable symbols over text. In the new design, related controls using symbols share the glass background.

Just be sure not to use symbols for actions like Edit or Select, which aren't easily represented by symbols.

If your toolbar has controls with both text and symbols, it's recommended that they stay separate to avoid confusion or unintended associations.

    If the toolbar looks crowded in your app, then it's a great opportunity to simplify, prioritize, and group actions.

    For those that are secondary, use a More menu, which is represented by an ellipsis symbol.

    These adjustments in the toolbar make the interface more predictable and actionable.

The introduction of Liquid Glass called for a new type of treatment called the Scroll Edge Effect. In Mail on iOS, the content scrolls under the Liquid Glass toolbar, and the Scroll Edge Effect allows the buttons above the content to remain clear and visible. Remember to include it in your app wherever there are floating Liquid Glass controls.

However, if your app has multiple actions in the toolbar, it's better to use the hard style. It creates a more distinct separation from the content area, increasing legibility. It's more frequently used in iPadOS and macOS apps.

Color can enhance communication and evoke your brand. However, too much of it, or color in the wrong places, can be perceived as visual noise. So here's a quick look at where and how you can effectively tint Liquid Glass.

    Start by using color sparingly and consistently. Reserve it for key actions that truly benefit from emphasis and refrain from adding color solely to evoke your brand.

    For example, a common concern is how to manage tinted backgrounds. Tinting every glass button in the toolbar can be distracting, pulling away attention from the content.

    Instead, make your brand color shine where it matters: the content area. This approach allows your app to continue expressing its unique personality without bringing attention to the toolbar.

    Lastly, when tinting Liquid Glass, use the built-in tinting. It generates a range of tones that change hue, brightness and saturation depending on what’s behind it, without deviating too much from the intended color.

Additionally, it adapts to accessibility features. Reduced Transparency makes Liquid Glass frostier and obscures more of the content behind it. And Increased Contrast highlights elements by adding a contrasting border.

I've presented several ways Liquid Glass is changing how we interact with and navigate apps. Another key piece of that is sheets. These are used for requesting specific information or presenting a simple task, and they, too, have been thoughtfully redesigned for Liquid Glass.

A resizable sheet expands based on detents, which are specific heights where the sheet naturally rests. At their smallest and medium detents, sheets use a new inset, floating glass appearance that helps people retain their original context. And at full height, sheets expand and the glass material recedes into a more opaque material, providing a focused experience that matches people's deeper level of engagement. Curt will elaborate on how to bring this behavior into your apps.

From a usability perspective, remember while working with sheets to show the grabber only when the sheet supports multiple detents, because it indicates that the sheet can be resized. And use an explicit button to indicate how to close it, even if it supports the dismiss gesture.

These components have been completely redesigned for the new design system and Liquid Glass, and to sit concentrically with the corners of Apple devices, conveying a subtle relationship between hardware and software.

    So it's important that your design system and visual design harmonize with Liquid Glass too.

For example, if in your design system there are any corners that just feel off, their shape probably needs to be concentric. This is especially common in nested containers, like artwork within a card, where mismatched corners can create tension or break the sense of balance. The beauty of the system is that it automatically calculates the inner radius relative to the containers and to the device, completing a seamless integration. Curt will cover this in more detail later.

    Lastly, the sense of harmony, dimension and fluidity of Liquid Glass is also incorporated into app icons across Mac, iPad, iPhone, watch, and CarPlay.

    System icons have been reimagined to make the most out of the new language and its new dynamic features.

These are made of stacked layers of a Liquid Glass material designed specifically for app icons, creating a truly dimensional design.

The material automatically adapts to all appearance modes, such as light mode and dark mode. And there’s a new range of appearance modes using Liquid Glass: a monochrome glass that comes in light and dark versions. And lastly, there are new tint modes: a dark tint that adds color to the foreground, and a light tint, where the color is infused into the background.

    To create these expressive and dimensional app icons across different appearances and platforms, use this new tool called Icon Composer. It's designed to fit into your existing workflow and pair with your current design tools. It gives you the ability to preview and play with all these new materials and effects.

    And that’s how to get started with the new design system and Liquid Glass. Everything I covered today comes built in allowing you to make the experience in your apps feel more organic, immersive, and fluid. I'm excited to invite Curt to the stage to show you how simple it is to implement and make your app truly shine on the new design system. Thank you and now here's Curt.

    Thanks Majo.

    Hi, I’m Curt. I’m a Technology Evangelist on the Developer Relations team here at Apple, and I'm excited to share how you can bring the new design to your apps. I'll reinforce the things Majo discussed and demonstrate how to make the new design come to life in your code. As my main example, I'll use Landmarks, a sample project available on the Apple Developer website. Here it is on Mac, iPad, and iPhone. When you compile your app with the Xcode 26 SDKs, you'll already notice changes throughout the UI. Many of these enhancements are automatically available. I'll describe those and tell you about some new APIs in iOS 26 and macOS Tahoe that let you customize the experience even further. All my examples will use SwiftUI, but there are parallel APIs in UIKit and AppKit as well. I'll begin with updates to structural app components like tabs and sidebars. Then I'll cover the new look and behavior of toolbars and share updates to the search experience.

    After that, I'll talk about how controls come to life with Liquid Glass. I'll wrap up the new design by describing how to adopt Liquid Glass in your own custom UI elements. Then I'll finish by highlighting some important changes in iPadOS 26.

I'll start with app structure. App structure refers to the ways that people navigate your app, things like tabs, sidebars, and sheets. Every one of these is refined for the new design. Sidebars allow navigating through a hierarchy of possibly many root categories. On both macOS Tahoe and iPadOS 26, sidebars are now made of Liquid Glass and float above the content. The behavior is the same on both platforms, but I'll drill in on the Mac. In the Landmarks app, the pink blossoms and the sand in the content area reflect against the sidebar. But the hero banner here ends abruptly against the edge of the sidebar. Allowing the content to be positioned beneath the sidebar instead would let the Liquid Glass take on even more of the banner's beautiful colors.

To do this, just add the new backgroundExtensionEffect modifier. This extends the view outside the safe area without clipping its content. I'll hide the sidebar for a moment to reveal what's happening behind it. The image is actually mirrored and blurred, extending the tone of the image beneath the sidebar while leaving all of its content visible.
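Here's a minimal sketch of what that looks like in SwiftUI, assuming a hypothetical "HeroBanner" image asset:

```swift
import SwiftUI

// Sketch: a hero image that extends (mirrored and blurred) outside the safe
// area so the sidebar's Liquid Glass can pick up its colors.
struct LandmarkHeader: View {
    var body: some View {
        Image("HeroBanner")            // hypothetical asset name
            .resizable()
            .scaledToFill()
            .backgroundExtensionEffect()
    }
}
```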

    Tab views provide persistent top-level navigation and they're best suited for a small number of key sections, typically 5 or fewer.

    With the new design, the tab bar on iPhone floats above the content, and you can configure it to minimize on scroll.

    To do this, just add the tabBarMinimizeBehavior modifier to your existing TabView. With the onScrollDown argument, the tab bar minimizes when scrolling down and re-expands when scrolling up.
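A minimal sketch of that configuration; the tab names and content here are placeholders:

```swift
import SwiftUI

// Sketch of a TabView whose tab bar minimizes when scrolling down and
// re-expands when scrolling up.
struct LandmarksTabView: View {
    var body: some View {
        TabView {
            Tab("Landmarks", systemImage: "building.columns") {
                Text("Landmarks list")
            }
            Tab("Collections", systemImage: "square.stack") {
                Text("Collections")
            }
        }
        .tabBarMinimizeBehavior(.onScrollDown)
    }
}
```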

    And like Majo said, you can now add views above the tab bar. Use this for views that you always want close at hand, like the playback view in Music.

    Place a view above the tab bar with the tabViewBottomAccessory modifier. This also takes advantage of the space left by the tab bar's collapsing behavior.

    To respond to this collapsing, just read the tabViewBottomAccessoryPlacement from the environment and adjust the content of your accessory when it collapses into the tab bar area.
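Here's a sketch of a bottom accessory that adapts when it collapses into the tab bar area. The player views are placeholders, and the exact placement values (such as `.inline`) are assumptions based on the description here:

```swift
import SwiftUI

// Sketch: a persistent accessory above the tab bar that switches to compact
// content when the tab bar is minimized.
struct PlayerAccessory: View {
    @Environment(\.tabViewBottomAccessoryPlacement) private var placement

    var body: some View {
        if placement == .inline {
            Text("▶︎ Now Playing")                               // compact, in the tab bar area
        } else {
            Label("Now Playing: Track Title", systemImage: "play.fill")
        }
    }
}

struct MusicTabs: View {
    var body: some View {
        TabView {
            Tab("Home", systemImage: "house") { Text("Home") }
            Tab("Library", systemImage: "music.note.list") { Text("Library") }
        }
        .tabViewBottomAccessory { PlayerAccessory() }
        .tabBarMinimizeBehavior(.onScrollDown)
    }
}
```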

    Like tabs and sidebars, the new design automatically enhances sheets. In the Landmarks app, creating a new collection presents a sheet of Landmark options. On iOS 26, partial height sheets are inset with a Liquid Glass background by default. At smaller heights, the bottom edges pull in nesting in the curved edges of the display. When transitioning to a full height sheet, the Liquid Glass background gradually transitions, becoming opaque and anchoring to the edges of the screen.

    Sheets directly morph out of the buttons that present them. Here's the existing code for presenting a sheet in the Landmarks app. There's a toolbar button to present the sheet and a sheet modifier defining the sheet's contents. I make the contents morph out of the source view in just 3 steps. Step one, I introduce a Namespace to associate the sheet and the button. Step 2, I mark the button as the source of the transition. And step 3, I mark the sheet's content as having a zoom transition, and here's the result.
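A sketch of those three steps, using a placeholder sheet body in place of the Landmarks collection options:

```swift
import SwiftUI

// Sketch: the sheet morphs out of the button that presents it.
struct AddCollectionButton: View {
    @State private var showingSheet = false
    @Namespace private var namespace                                   // Step 1: a namespace

    var body: some View {
        Button("New Collection", systemImage: "plus") {
            showingSheet = true
        }
        .matchedTransitionSource(id: "newCollection", in: namespace)   // Step 2: mark the source
        .sheet(isPresented: $showingSheet) {
            Text("Collection options")                                 // placeholder content
                .navigationTransition(.zoom(sourceID: "newCollection",
                                            in: namespace))            // Step 3: zoom transition
        }
    }
}
```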

Other presentations like popovers, alerts, and menus flow smoothly out of Liquid Glass controls too, drawing focus from their action to the content they present. You get this behavior automatically. In the new design, dialogs also automatically morph out of the buttons that present them. Just attach the confirmationDialog modifier to the source button.

    Next up, toolbars.

    In the new design, toolbar items are placed on a Liquid Glass surface that floats above your app. The items adapt to the content beneath and are automatically grouped.

When I compiled the Landmarks app with Xcode 26, my custom toolbar items were grouped separately from the system-provided back button.

In Landmarks, the “Favorite” and “Add to collection” buttons are related actions. So I use the new ToolbarSpacer API with fixed spacing to split them into their own group. This provides visual clarity that the grouped actions are related, while separate actions like Share Link and Inspector Toggle have distinct behavior.

ToolbarSpacer can also create a flexible space that expands between toolbar items. The Mail app uses this technique to leading align the filter item and trailing align the search and compose items.
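A sketch of both spacer styles in a toolbar; the buttons stand in for the Landmarks actions:

```swift
import SwiftUI

// Sketch: fixed spacers split related items into groups, and a flexible
// spacer pushes the remaining item toward the far edge.
struct LandmarkDetailView: View {
    var body: some View {
        Text("Landmark detail")
            .toolbar {
                ToolbarItem { Button("Favorite", systemImage: "heart") {} }
                ToolbarItem { Button("Add to Collection", systemImage: "rectangle.stack.badge.plus") {} }

                ToolbarSpacer(.fixed)        // separates the related pair above

                ToolbarItem { Button("Share", systemImage: "square.and.arrow.up") {} }

                ToolbarSpacer(.flexible)     // expands between items

                ToolbarItem { Button("Inspector", systemImage: "sidebar.trailing") {} }
            }
    }
}
```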

These toolbar layout enhancements are available on Mac, iPad, and iPhone. The new design introduces a few other changes to toolbars too. It uses a monochrome palette in more places, including in toolbars. This helps reduce visual noise, emphasizes your app's content, and aids legibility.

Tint icons with the tint modifier or a prominent button style, but again, use color sparingly. As Majo said, tint buttons to convey meaning, like a call to action or a next step, and not just for visual effect or branding.

    In the new design, an automatic Scroll Edge Effect keeps controls legible. It's a subtle blur and fade effect applied to content as it scrolls beneath bars.

    If your app has any extra backgrounds or darkening effects behind bar items, make sure to remove those as they'll interfere with this effect.

    As Majo shared, for denser UIs with a lot of floating elements like the Calendar app here, tune the sharpness of the effect with the scrollEdgeEffectStyle modifier, passing the hard edge style.
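Here's a minimal sketch of opting into the hard style; the exact edge argument is an assumption for illustration:

```swift
import SwiftUI

// Sketch: request the sharper "hard" scroll edge effect for a dense layout.
struct ScheduleList: View {
    var body: some View {
        List(0..<50, id: \.self) { index in
            Text("Event \(index)")
        }
        .scrollEdgeEffectStyle(.hard, for: .top)
    }
}
```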

    All right, that was toolbars. Now I'll move on to search. The new design embraces two main search patterns. The first is search in the toolbar. On iPhone, this places the field at the bottom of the screen where it’s easy to reach, and on iPad and Mac in the top trailing position of the toolbar. The second pattern is to treat search as a dedicated tab in a multi-tab app. I'll describe how to build both patterns. For the Landmarks app, I placed search in the toolbar. Choose this placement in your app when people can get to all or at least most of your content by searching.

    The search field appears on its own Liquid Glass surface. Tapping it causes it to receive focus and shows the keyboard.

    To get this variant in Landmarks, I just added the searchable modifier on the existing NavigationSplitView.

    Declaring the modifier here indicates that search applies to the entire view, not just one of its columns.

    On iPhone, this variant automatically adapts to bring the search field to the bottom of the display.

    Depending on device size and other factors, the system may choose to minimize the field into a button like the one here in Mail.

    When I tap the button, a full-width search field appears above the keyboard. To explicitly opt in to the minimized behavior, say because search isn't a main part of your app's experience, use the new searchToolbarBehavior modifier.
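A sketch combining the two pieces just described: a searchable modifier on the whole split view, plus an explicit opt-in to the minimized behavior (the `.minimize` value is an assumption based on the description here):

```swift
import SwiftUI

// Sketch: search applies to the entire NavigationSplitView, and the field
// can collapse into a toolbar button.
struct LandmarksSplitView: View {
    @State private var searchText = ""

    var body: some View {
        NavigationSplitView {
            List { Text("Sidebar") }
        } detail: {
            Text("Detail")
        }
        .searchable(text: $searchText)        // search for the entire view, not one column
        .searchToolbarBehavior(.minimize)     // collapse the field into a button
    }
}
```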

    Searching in tabbed apps often begins in a dedicated search tab.

    This pattern is used in apps across all of Apple’s platforms like the Health app as Majo mentioned.

    To do this in your app, add a tab with the search role and place the searchable modifier on your TabView. Now, when someone selects this tab, a search field takes its place. And the content of the search tab appears. People can interact with your browsing suggestions here or tap the search field to bring up the keyboard and search for specific terms.
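A minimal sketch of the dedicated search tab; the other tab and the suggestion content are placeholders:

```swift
import SwiftUI

// Sketch: a Tab with the search role, plus searchable on the TabView itself.
struct AppTabView: View {
    @State private var searchText = ""

    var body: some View {
        TabView {
            Tab("Browse", systemImage: "square.grid.2x2") {
                Text("Browsing content")
            }
            Tab(role: .search) {
                Text("Search suggestions")    // shown when the search tab is selected
            }
        }
        .searchable(text: $searchText)
    }
}
```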

    On Mac and iPad, when someone selects the search tab, the search field appears centered above your app's browsing suggestions.

    These search patterns give you greater flexibility and control over the search experience in your app.

    The new design refreshes the look and feel across platforms for controls like buttons, sliders, menus and more. Besides the Liquid Glass effect, the new design makes controls more similar across all of Apple's platforms, providing a familiar experience as people move between their devices.

Bordered buttons now have a capsule shape by default, continuing the curved corners of the new design. There's one exception here: many small and medium-sized controls on macOS retain a rounded rectangle shape, which preserves horizontal density.

Use the buttonBorderShape modifier to adjust the shape for any control size.

    The new design makes control sizes more consistent across platforms too.

    Most controls on macOS are slightly taller now, providing a little more breathing room around the control’s label and enhancing the size of the click target. If you have an existing high density layout, say a complex inspector, use the controlSize modifier to request a smaller size.

    In addition to updated sizes and shapes that standardize controls across Apple's platforms, the new design brings Liquid Glass to any button in your app with the new glass and glassProminent button styles.
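A sketch pulling these control updates together; the button labels are placeholders, and the exact style names (particularly `.glassProminent`) follow the description above:

```swift
import SwiftUI

// Sketch: a custom border shape, a smaller control size for dense layouts,
// and the new glass button styles.
struct ControlsSampler: View {
    var body: some View {
        VStack(spacing: 12) {
            Button("Rounded Rectangle") {}
                .buttonBorderShape(.roundedRectangle)   // opt out of the default capsule

            Button("Compact") {}
                .controlSize(.small)                    // request a denser size

            Button("Glass") {}
                .buttonStyle(.glass)                    // Liquid Glass

            Button("Prominent Glass") {}
                .buttonStyle(.glassProminent)           // tinted, prominent Liquid Glass
        }
        .padding()
    }
}
```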

    The new design also brings consistency to sliders across macOS, iOS, and iPadOS. On all these platforms, sliders now support discrete values with tick marks. And you can now set an arbitrary starting point. This is useful for values that people can adjust up or down from a central default. Think about selecting a faster or slower speed in your favorite podcast app.

Menus on both iOS and macOS also have a new design, and new to macOS, icons now appear on the leading edge of menus. As always, use a label or other standard control initializer when creating a menu. Now the same API creates the same result on iOS and macOS.

    In addition to updates to controls, there are new APIs to update your controls for the new design.

    Many system controls have their corners aligned within their containers, whether that's the root window on macOS or the bezel of your iPhone.

    As Majo said, this is called corner concentricity. I love that animation.

    To build views that automatically maintain concentricity with their container, use the concentric rectangle shape. Pass the containerConcentric configuration to the corner parameter of a rectangle. The shape will automatically match its container across different displays and window shapes. The best way to adopt the new design is to use standard controls, app structures, search placements and toolbars.

    But sometimes your app might need a bit more customization. Next, I'll share how to build custom Liquid Glass elements for your app. Maps is a great example for this use case with its custom Liquid Glass controls that float above the map content. I'm going to add similar custom Liquid Glass badges to the Landmarks app for each landmark that people visit. I'll start by creating a custom badge view with the Liquid Glass effect.

    To add Liquid Glass to your custom views, use the glassEffect modifier. The default glassEffect modifier places your content in a capsule shape with the Liquid Glass material below and highlight effects above. Text content within a glass effect automatically uses a vibrant color that adapts to maintain legibility against colorful backgrounds. Customize the shape of the glass effect by providing a shape to the modifier. For especially important views, modify the glass with a tint. Similar to toolbar buttons, only use this to convey meaning and not just for visual effect or branding.

Just like text within a glass effect, tint also uses a vibrant color that adapts to the content behind it. On iOS, for custom controls or containers with interactive elements, add the interactive modifier to the glass effect.

    With this modifier, Liquid Glass reacts to interaction by scaling, bouncing and shimmering, matching the effect provided by toolbar buttons and sliders.
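Here's a sketch of a custom badge with these options; the badge content is illustrative:

```swift
import SwiftUI

// Sketch: the default capsule glass effect, then a variant with a custom
// shape, a tint, and interactivity.
struct VisitedBadges: View {
    var body: some View {
        VStack(spacing: 16) {
            Label("Visited", systemImage: "checkmark.seal")
                .padding()
                .glassEffect()                                   // capsule by default, vibrant text

            Label("Favorite", systemImage: "heart")
                .padding()
                .glassEffect(.regular.tint(.pink).interactive(), // tint + interaction
                             in: .rect(cornerRadius: 16))        // custom shape
        }
    }
}
```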

    So now that I have a custom badge, I'll bring multiple badges together to interact and blend with each other.

    To combine multiple Liquid Glass elements, use a GlassEffectContainer. This grouping is essential for visual correctness and performance. The Liquid Glass material reflects and refracts light, picking colors from nearby content. It does this by sampling the content around it. Using a glass effect container allows adjacent elements to share a sampling region, which avoids interference. Using shared sampling regions is crucial for performance too because it minimizes sampling passes.

    Besides controlling sampling, GlassEffectContainers also enable morphing effects. In the Landmarks app, I'm using the GlassEffectContainer to group my badges. Inside the container, I show a stack of badge labels when the state is expanded and I have a badge toggle to switch the state off and on. When I tap this button to expand my badges, I get this wonderful fluid morphing animation.

    Once I have my GlassEffectContainer, there are just 3 steps to building this animation. Step one, I render my labels and toggle using Liquid Glass.

    Step 2, I introduce a local Namespace. And step 3, I associate my labels and my toggle with that Namespace using the glassEffectID modifier so the system knows that these belong together.
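A sketch of that grouping and morphing setup; the badge names and layout are illustrative:

```swift
import SwiftUI

// Sketch: badges grouped in a GlassEffectContainer so they share a sampling
// region and morph into the toggle button.
struct BadgesView: View {
    @State private var isExpanded = false
    @Namespace private var namespace                               // Step 2: a local namespace

    var body: some View {
        GlassEffectContainer {
            VStack(spacing: 16) {
                if isExpanded {
                    ForEach(["First Visit", "Ten Landmarks", "Explorer"], id: \.self) { badge in
                        Label(badge, systemImage: "medal")
                            .padding()
                            .glassEffect()                         // Step 1: render with Liquid Glass
                            .glassEffectID(badge, in: namespace)   // Step 3: associate with the namespace
                    }
                }

                Button("Badges", systemImage: "medal.fill") {
                    withAnimation { isExpanded.toggle() }
                }
                .padding()
                .glassEffect()
                .glassEffectID("toggle", in: namespace)
            }
        }
    }
}
```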

    Now when I tap the button again, the award badges are reabsorbed beautifully.

    The Liquid Glass effect offers an excellent way to highlight the functionality that makes your app truly unique. So after that recap of the features built into the new design and the customizations available to you, here are a couple of other changes coming in iPadOS 26.

    iPadOS 26 brings changes that make iPad more powerful and versatile. Among these features is the new windowing system which includes a new way to resize windows. Every app that supports resizing now shows a handle in the bottom right corner just like on visionOS. Dragging this will start resizing the app into a window that floats above your wallpaper.

    In the past, it's been possible to prevent window resizing. Some of your iPad apps may be doing that. This is deprecated in iPadOS 26. Beginning in the next major release after iPadOS 26, all iPadOS apps must support resizing.

    In addition to resizing, iPadOS 26 also introduces a new way to multitask. With multitasking, there's an important consideration to keep in mind. At the core of multitasking are scenes. A scene represents an instance of your app's interface. For example, Mail lets people open a window for each of their mailboxes, helping them stay organized.

    Each open mailbox is displayed in its own scene.

    SwiftUI based apps automatically come with scene support. In UIKit based apps, scenes have been optional since they were introduced in iOS 13. Going forward, scenes are required. This is true on iPadOS as well as iPhone and Mac Catalyst. Beginning with the next major release after iOS 26, only apps that support scenes will run.

    One important distinction, while support for multiple scenes is encouraged, the only requirement is adopting the scene-based life cycle.

    There's a great guide online to help you add scene support to your app.

    Supporting resizing and multitasking are great ways to give people more flexibility in how they interact with your app. So this is the perfect time for you to elevate your iPad app to be more flexible and truly shine on the platform.

    Majo and I and all of our colleagues are so excited to share this new chapter of Apple design with you and we can't wait to see what you build with Liquid Glass.

    Next I’ll welcome my colleague Shashank to tell you about updates to Apple Intelligence. Shashank.

    Thank you, sir.

Hey everyone, I’m Shashank. I’m an AIML Technology Evangelist here at Apple. Today I’m excited to take you on a quick tour of the latest updates in machine learning and Apple Intelligence that we announced at WWDC25. And I'll show you how you can tap into these features to make your own apps even smarter.

    We're going to cover 3 things. First, I’ll give you an overview of intelligence features built right into the operating system, and how you can tap into them through system frameworks. Next, I'll show you how to integrate your apps more deeply across the system. And finally, I'll explain how Apple's tools and APIs can help you optimize and deploy custom machine learning models for on-device execution.

Let's start with the big picture. Machine learning and Apple Intelligence are at the heart of many built-in apps and features across Apple's operating systems. Last year, we brought generative intelligence into the core of Apple's operating systems with foundation models that power Apple Intelligence. This gave us Writing Tools, Genmoji, and Image Playground built right into the system, and you can integrate these features directly into your own apps. For example, if your app uses system text controls, you automatically get Genmoji support. You can also use the new APIs to make Genmoji appear in your text wherever you want.

In Image Playground, people can generate images using ChatGPT and explore new styles like oil painting or vector art. The Image Playground framework lets you present an Image Playground sheet right in your app, or you can create images programmatically using the Image Creator API.

If your app uses standard UI frameworks to show text views, it already supports Writing Tools. With Writing Tools, people can refine what they write by rewriting, proofreading, or summarizing text, all in place. And these tools now come with even more capabilities. New in iOS, iPadOS, and macOS 26, people can now follow up on Writing Tools suggestions. After rewriting, you might ask it to make the text a little warmer, more conversational, or more encouraging.

    Although Writing Tools appear where people select text, if your app is text heavy, you can make them even easier to find by adding a toolbar button.

Similarly, in context menus, Writing Tools items are added automatically. If you use a custom menu or want to arrange these items differently, you can use the APIs to get the standard items and place them wherever you like. At WWDC25, we also expanded the range of languages supported by Apple Intelligence.

Many of you have expressed interest in accessing the underlying language models that power Apple Intelligence features. Well, I'm now excited to say that you have direct access to them using the Foundation Models framework. The Foundation Models framework gives you direct access to the same on-device large language model that powers Apple Intelligence, all through a convenient and powerful Swift API. Now, you can build advanced new features right into your own apps. For example, you can use this to enhance existing features, like offering personalized search suggestions. Or you can create entirely new experiences. Imagine generating a custom itinerary in your travel app, tailored on the fly.

    You could even use it to generate dialogues for characters in a game, all completely on device.

    I think that's pretty cool.

    Because the framework runs fully on device, users' data stays private and doesn't need to be sent anywhere.

    These AI features are readily available and work offline. No account to set up, no API keys to manage. It's all ready to go.

    And best of all, there's no cost to you or your users for making any number of requests. And most importantly, it's all built into the operating system, so there's no impact to your app size.

It’s available in macOS, iOS, iPadOS, and visionOS, and runs on all Apple Intelligence-supported hardware and in all supported languages. So, let's take a closer look at the framework and how it works. The Foundation Models framework gives you access to a 3-billion parameter model, which is a device-scale model optimized for on-device use cases like content generation, summarization, classification, multi-turn conversation, and more. It's not designed for world knowledge or advanced reasoning tasks; those are tasks for which you might still use a server-scale LLM. Let's take a look at how this works.

    You can use the Foundation Models framework to send a prompt to the on-device large language model or LLM for short. The model can then reason about the prompt and generate text.

For example, you could ask it to create a bedtime story about a fox. The model responds with a detailed, imaginative bedtime story. This is the same general purpose model that powers features like Writing Tools across the system. Let's return to our personalized search example to see how you would implement this with the Foundation Models framework.

    Prompting the model takes 3 lines of code.

First, we import the framework. Next, we create a language model session. And then we send the prompt to the model. The prompt could come from you or the person using the app. You can even dynamically generate prompts based on user input. After providing the prompt, you get a response. By default, language models produce unstructured natural language output, as you can see here. This is easy for humans to read, but could be difficult to map onto custom views that you may have in your apps. To address this, the Foundation Models framework gives you guided generation. Here’s how it works.

First, we specify what our output should look like with a struct.

SearchSuggestions is a simple struct that holds a list of string search terms. Now, this list is a lot easier to map onto your views, but it's not ready yet. To this struct, we apply the Generable macro. Generable is an easy way to let the model generate structured data using Swift types. Next, we specify guides.

    Guides let you provide descriptions and control values for your associated type. In this example, we want to generate 4 search terms as a list of strings. You can now pass it to the model using the generating argument. And the output now is a list of suggestion strings that can be easily mapped onto your views.
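Here's a sketch of both plain prompting and guided generation. The prompt text and the SearchSuggestions type are illustrative, and the exact guide syntax (such as the count of four) follows the description above:

```swift
import FoundationModels

// Sketch: describe the output shape with a Generable type and guides.
@Generable
struct SearchSuggestions {
    @Guide(description: "A list of suggested search terms", .count(4))
    var searchTerms: [String]
}

func fetchSuggestions() async throws {
    let session = LanguageModelSession()

    // Unstructured output: plain natural-language text.
    let plain = try await session.respond(to: "Suggest search terms for a landmarks app.")
    print(plain.content)

    // Guided generation: the response content is typed as SearchSuggestions.
    let structured = try await session.respond(
        to: "Generate suggested searches for a landmarks app.",
        generating: SearchSuggestions.self
    )
    print(structured.content.searchTerms)
}
```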

Guided generation gives you control over what the model should generate, whether that's strings, numbers, arrays, or custom data structures that you define. Guided generation fundamentally guarantees structural correctness using a technique called constrained decoding. With guided generation, your prompts can be simpler and focused on the desired behavior instead of providing format instructions in the prompt. And all of this also helps improve model accuracy and performance. In addition to what you provide in the prompt, the model brings its own core knowledge from its training data. But remember, the model is built into the OS, and its knowledge is frozen in time. So, for example, if I were to ask it about the weather in Cupertino right now, there's no way for the model to know that information. To handle use cases where you need real-time or dynamic data, the framework supports tool calling.

Tool calling also lets you go beyond text generation and perform actions. You can use tools to grant the model access to live or personal data, like weather or calendar events. You can even cite sources so people can fact-check its output, and tools can also take real-world actions in your app, on the system, or out in the real world. Next, let’s go over how these foundation models can power up your apps with brand new capabilities.

Consider a journaling app. Instead of providing generic journaling suggestions, foundation models can generate suggestions that feel personal, based on past entries and calendar events. And because everything runs on device, people’s sensitive data, like health info, stays private.

Or think about an expense app. No more tedious manual entry. With foundation models, your app can pull expense details directly from text. You can also go a step further and combine it with the Vision framework to pull data from a receipt photo or screenshot, all processed directly on device.

    In productivity apps, people can instantly rewrite dense notes for clarity or summarize meeting transcripts into actionable bullet points. This can be a huge time saver helping people focus on what really matters.

By combining speech recognition with foundation models, an app can enable natural language search. For example, you can say, show me a pet-friendly 3-bedroom house. Your app extracts the details from your speech and uses tool calling to make search simple, natural, and fast.

    And this is just the beginning. There's so much more you can create with these capabilities.

    You can use the models for many common language tasks such as content generation, summarization, analyzing inputs, and many more.

For advanced ML practitioners in the audience, if you have a specialized use case, you can train your own custom adapters using the adapter training toolkit. But keep in mind, this comes with significant responsibilities, because you will need to retrain it as Apple improves the model over time. For details, you can check out the developer website.

    When you're ready to start experimenting, a great place to start is a new Playgrounds feature in Xcode.

Just use the #Playground macro in any code file in your project and start writing your prompts. The model’s response will immediately appear in the canvas on the right, just like SwiftUI previews. In this example, the canvas shows both the unstructured and the guided generation model responses. Playgrounds are a great way to experiment with the Foundation Models framework. We recommend trying a variety of prompts to find what works best for your use case, and Playgrounds help you iterate quickly.
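A minimal sketch of the macro in use; the module name for the import and the prompt are assumptions for illustration:

```swift
import FoundationModels
import Playgrounds   // assumed module providing the #Playground macro

// Sketch: the response appears in the Xcode canvas, like a SwiftUI preview.
#Playground {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Suggest three trail names for a hiking app."
    )
}
```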

So in summary, the Apple foundation model is an on-device model, specifically optimized and compressed to fit in your pocket. Due to its smaller size, the model has limited world knowledge; you can use tool calling to bring in real-world data. The Playgrounds feature makes it easy to evaluate and test your prompts, as you saw in the example. And as you build your app using foundation models, please consider sharing your feedback through Feedback Assistant, which will help us improve the model and the APIs.

    That was the Foundation Models framework. We can't wait to see all the amazing things you'll build with it.

Alongside foundation models, you also have access to other powerful machine learning APIs, each tailored for a specific domain. There’s Vision for understanding images and video, Natural Language for working with text, Translation for multi-language text, Sound Analysis for recognizing categories of sounds, and Speech for transcribing words in audio.

Let’s start with Vision. Vision has over 30 APIs for different types of image analysis, and this year, Vision is adding two new APIs: a recognize documents request for structured document understanding, and a detect camera lens smudge request for identifying photos taken with a smudged lens. Let's explore these in a little more detail.

This year, Vision is bringing improvements to text recognition. Instead of just reading lines of text, Vision now provides document recognition. It can group document structures, making it easy to process and understand documents. For example, if you have a handwritten sign-up form, instead of manually typing out names and contact details, Vision parses the table directly for you. Rows and cells are grouped automatically, so you spend less time parsing data. That's pretty cool.
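Here's a heavily hedged sketch using request and observation names as described above; the exact types and properties may differ from the shipping API:

```swift
import Vision

// Sketch: run the document recognition request on an image file and inspect
// the grouped tables. Names here follow the session's description.
func readSignUpForm(at url: URL) async throws {
    let request = RecognizeDocumentsRequest()
    let observations = try await request.perform(on: url)

    if let document = observations.first?.document {
        print("Found \(document.tables.count) table(s)")   // rows and cells come pre-grouped
    }
}
```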

Also new in Vision this year, we can now detect smudges on camera lenses, helping you ensure that people capture clear, high-quality images.

    For example, someone scanning a document might accidentally smudge their lens with their finger.

    This leads to blurry images that are tough to process.

The smudge detection API spots this, so you can prompt the user to clean their lens or retake the photo, ensuring you always process quality images. And that was the Vision framework. Now, let’s move on and take a look at the Speech framework.

This year, we introduced the new SpeechAnalyzer API, an entirely on-device speech-to-text API that you can use with just a few lines of code. The new model is faster and more flexible than ever. The SpeechAnalyzer model already powers features across Notes, Voice Memos, Journal, and more. In addition to SpeechAnalyzer, SpeechTranscriber also gets an update with a brand new model that supports a broad spectrum of use cases. We wanted to create a model that could support long-form conversational cases where some speakers might not be close to the mic, such as recording a meeting. We also wanted to enable live transcription experiences that demand low latency without sacrificing accuracy or readability, while keeping the speech private. Our new on-device model achieves all of that.

    You can now support the same use cases in your own applications.

The best part is that you don't have to procure or manage the model yourself. The assets are downloaded when needed, live in system storage, and don't bloat your app size or memory, and they update automatically as Apple improves them. SpeechTranscriber can currently transcribe these languages, with more to come.

    And that's a wrap on platform intelligence.

Next, let’s look at even more ways you can weave system features into your app.

App Intents lets you integrate your app's core functionality across people's devices. With the App Intents framework, people can easily find and use what your app offers even when they're not in your app. It deeply connects your app's actions and content with system experiences. This is great because your features show up in places like Visual Intelligence, Shortcuts, and Spotlight. App Intents isn't just another framework; it expands your app's reach across the entire system. Let's start with Visual Intelligence. Visual Intelligence builds on Apple Intelligence and helps people explore their surroundings. First introduced in iOS 18, it let people point the camera to learn about a cafe or a nice sneaker they spot on the go. In iOS 26, this now works on iPhone screenshots too, so people can search or take action on content they like right on the screen.

    For example, someone can take a screenshot of a landmark. And then run an image search. The system shows a search panel.

    They pick the app they want to see results from.

    And tap to open the app right into the relevant page.

    If they don't see what they need, they can tap the more results button.

And your app opens in its search view. If your app offers image search capabilities, integrating with Visual Intelligence provides a powerful new way for people to discover and engage with your content, even when they're outside of your app. This integration extends your app's search functionality to the system level, allowing people to seamlessly transition from a real-world object or image to relevant information and actions within your app.

Take a shopping app, for example. Today you open the app, navigate to the image search, and start a search. With Visual Intelligence, you can snap a screenshot of the bag or dress you love from social media and instantly launch the app's search. This removes friction and makes engagement feel more natural.

Next, we have Shortcuts. Shortcuts lets you automate repetitive tasks and connect functionality from different apps. This year, we’re bringing the power of Apple Intelligence into Shortcuts with new intelligent actions. With App Intents, your app's actions can combine with Shortcuts, letting people build powerful customized workflows. One highlight is the Use Model action. People can tap right into Apple Intelligence models to get responses that feed into the shortcut. Passing text or formatting data can be as simple as writing a few words. People can choose a large server-based model on Private Cloud Compute to handle complex requests while protecting their privacy, or the on-device model to handle requests without the need for a network connection.

    They can also choose ChatGPT if they want to tap into broad world knowledge and expertise.

    This Use Model action is just one of many new intelligent actions alongside Image Playground, Writing Tools and more in iOS 26.

    Here are some other use cases. For example, people can use a model to filter calendar events for a trip.

    Or summarize content on the web, like grabbing the word of the day.

To leverage the Use Model action, first expose your app's functionality through App Intents. This allows people to integrate your app directly into their model-driven shortcuts. People using Shortcuts can explicitly select the model's output type, such as rich text, list, or dictionary, and as a developer, you ensure your App Intents are prepared to receive these output types. People can also pass content from your app into the model. You define this content as app entities using the App Intents framework, which allows the model to reason over your data from the app. These integrations make the Use Model action a powerful tool for extending your app's reach.
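Here's a minimal sketch of exposing an action through App Intents so it can appear in Shortcuts; the intent name and parameter are hypothetical:

```swift
import AppIntents

// Sketch: a hypothetical intent that exposes a landmark search action
// to Shortcuts and Spotlight.
struct SearchLandmarksIntent: AppIntent {
    static var title: LocalizedStringResource = "Search Landmarks"

    @Parameter(title: "Query")
    var query: String

    func perform() async throws -> some IntentResult {
        // Run the app's own search with `query` here.
        return .result()
    }
}
```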

In summary, App Intents helps you integrate your app throughout the system. You can use it to bring image search to Visual Intelligence, expose content from your app to Shortcuts, and allow people to run actions from your app directly from Spotlight on the Mac.

    So far we've covered how to tap into ML and AI powered features built into the system.

Next, let's explore how to bring any model to a device and all the considerations that come with it. This can feel a bit complex, but it’s made easy with Core ML. All you need is a machine learning model in the Core ML format. These model assets contain a description of the model's inputs, outputs, and architecture, along with its learned parameters, or weights.

    There are many publicly available models with a wide variety to choose from. For example, there are models like Whisper for audio, Stable Diffusion for image generation, Mistral for language processing. These are all available and optimized for Apple devices.

    Where do you get them? You can download them directly from Apple's page on Hugging Face or from developer.apple.com. All of these models are optimized specifically for Apple silicon and come in the ML package format, making them easy to integrate right into your Xcode project. Here's how you would use them.

After you download a model, you can simply drag and drop it into Xcode to get started. Xcode not only recognizes your model, but also generates Swift code for interacting with it based on the model's inputs and outputs. Xcode also presents all the metadata right there after you import the model, giving you insight into the model's structure and functionality.
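As a rough sketch, calling the Xcode-generated interface might look like this, assuming a hypothetical model named "MyClassifier"; the class, input, and output names depend entirely on the actual model:

```swift
import CoreML
import CoreVideo

// Sketch: load the generated model class and run a single prediction.
func classify(_ pixelBuffer: CVPixelBuffer) throws -> String {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all            // allow CPU, GPU, and Neural Engine
    let model = try MyClassifier(configuration: configuration)   // hypothetical generated class
    let output = try model.prediction(image: pixelBuffer)        // input name depends on the model
    return output.classLabel                                     // output name depends on the model
}
```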

If you have a computationally intensive model, it's important to ensure that the model's prediction performance meets the desired latency for a good user experience in your app. With a few clicks, Xcode can generate a performance report that summarizes load time, on-device completion time, and prediction latency, which is the time it takes to get responses from the model.

You can also check if the model is supported on the Neural Engine to deliver even better performance and latency. Now, what if the models you want aren’t available in the Core ML format? Let’s say you or your data science team uses a framework like PyTorch to train and fine-tune custom models for your specific needs. Once you're happy with the model's performance, it's time to integrate it into your app. That’s where Core ML Tools comes in. It's an open source Python package with APIs to optimize and convert your models from a variety of open source frameworks, like PyTorch, into the Core ML format.

    Finally, for those of you who are on the front lines of AI research, you can fully leverage the power of Apple silicon to prototype custom solutions using the latest research models. To keep up with the current frontier of exploration, you need the ability to run large models, tinker with unique architectures, and learn and collaborate with the open ML community. We have advanced tools and resources to help you explore this frontier.

MLX is an open source ML framework purpose-built for Apple silicon. It's a flexible tool that can be used for everything from basic numeric computation all the way up to running large-scale frontier machine learning models on Apple devices. You can use MLX to generate text with large language models, or generate images, audio, or even video with the latest models. You can also use it to train, fine-tune, and customize machine learning models directly on your Mac.

MLX can run state-of-the-art ML inference on large language models like Mistral with a single command line call right on your Mac. For example, here it is generating Swift code for a quicksort algorithm using an LLM.

MLX allows you to stay in step with state-of-the-art research, thanks to the open source community working hard to make these models available on MLX.

All of the MLX software is open source under a permissive MIT license. The core software is available on GitHub, along with several examples and packages built using the Python and Swift APIs. MLX also has an active community of model creators on Hugging Face. Many of the latest models are already in the MLX Community Hugging Face organization, and new models are uploaded almost every day.

    In addition to MLX, if you use other frameworks like PyTorch or JAX, they’re also accelerated on Apple silicon using the Metal API. This means you can keep using the tools you already know and love for model exploration, and when you're ready to use these models in your apps, you can use Core ML to deploy them.

    That's everything new in machine learning and Apple Intelligence. Based on your needs and experience with machine learning, select the frameworks and tools that are best suited for your project, whether it's fine tuning an LLM, optimizing computer vision for Vision Pro, or tapping into ML powered APIs. We've got you covered.

    And all of this is optimized for Apple silicon, providing efficient and powerful execution for your machine learning and AI workloads.

Now is the perfect time to bring ML and AI to your apps on Apple platforms. Try out the foundation models, use Xcode Playgrounds to get started, add image search to Visual Intelligence, and experiment to see what's possible. I'm genuinely excited to see what you'll build. Thank you so much for joining us today.

    Back to Leah.

Wow, I'm already getting ideas on how to use the foundation models. Now I’d like to bring Allan on stage to talk about the latest updates to visionOS.

Thank you. Hello everyone, how are you? That's great. Well, thank you all for joining us, and thank you to everyone online as well. My name is Allan Schaffer. I’m a Technology Evangelist here in Developer Relations, and I’m delighted to tell you all about visionOS 26 and the updates from WWDC. We just had WWDC last month, and it was a huge update for visionOS. This year we have 14 videos dedicated to visionOS, all designed to help you discover the latest spatial computing updates. We covered everything from using Metal with Compositor Services, to updates to SwiftUI, enhancements to RealityKit, third-party accessories, new video technologies, and a ton more. It's all a lot more than I can cover in the next half hour, so the game plan today is to give you the highlights and some of the biggest updates of what's new. With that in mind, let's dive into visionOS 26 and have a look at our agenda. First, I'll cover some of the volumetric features in SwiftUI that will make your app feel much more immersive. Then there are new system features as well, for example Apple Intelligence, and new ways to make your app's content remain persistent in the room. There are also new interactions and accessory support for finer control, button input, and haptic feedback. After that, I'll cover some updates to immersive media, and new video formats that are uniquely suited to Vision Pro. There are also many new sharing and collaboration features this year, and then finally I'll share what's new in the enterprise APIs for those of you who are developing enterprise apps. All right, so let's get started with the volumetric features coming in SwiftUI.

    Now, I'm sure you've heard this before, but it bears repeating: the best way to build a great app for Vision Pro and leverage our native tools and technologies is with SwiftUI. And SwiftUI now gives you some new ways to build your apps in 3D and make them even more immersive. One of the main additions to SwiftUI in visionOS 26 relates to the layout of your content: many of the familiar SwiftUI layout tools and view modifiers now have first-class 3D analogs that give your views depth and a Z position, and let you act on those. So if you're familiar with developing 2D apps in SwiftUI, you can now create very rich 3D layouts in the same way you're probably already used to.

    A couple I'd like to highlight are the new depthAlignment and rotation3DLayout. I'll start with depthAlignment, which is a very easy way to handle composition for common 3D layouts. For example, here we're using the front depthAlignment to automatically place this name card at the front of the volume that contains the 3D model. Very simple.

    Another addition is the rotation3DLayout modifier. Notice how the top airplane model makes way for the middle one to rotate comfortably. What rotation3DLayout does is allow you to rotate geometry within the layout system, and it communicates those rotations back to your views so that your app can react.
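
    Here's a minimal sketch of how those two additions might read in code. Both modifier names come straight from the session, but the exact signatures, alignment values, and the NameCard view are assumptions for illustration, not confirmed API.

        import SwiftUI
        import RealityKit
        import Spatial

        // Assumed composition: a name card pinned to the front of a volume, and a
        // model that rotates within the layout so neighboring views make way for it.
        struct TrailVolume: View {
            var body: some View {
                VStack {
                    ZStack {
                        Model3D(named: "TrailScene")   // 3D model shown in the volume
                        NameCard()                      // hypothetical 2D card view
                    }
                    // Assumption: this is how the front depth alignment is applied.
                    .depthAlignment(.front)

                    Model3D(named: "Airplane")
                        // Assumption on the parameter: a Rotation3D from the Spatial
                        // framework describing the rotation the layout should honor.
                        .rotation3DLayout(Rotation3D(angle: Angle2D(degrees: 30), axis: .y))
                }
            }
        }

        struct NameCard: View {
            var body: some View { Text("Mt. Whitney Trail").padding().glassBackgroundEffect() }
        }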

    A few other updates to mention briefly: with presentations, you can now show transient content, like this content card about the trail, within volumes or as ornaments. This works with menus, tooltips, alerts, sheets, and popovers, and presentations can break through 3D content and stay visible when they're being occluded, which helps them look great in any context.

    Another feature: normally, windows and volumes act as containers for your app's UI and content in the Shared Space. But with a new feature called dynamic bounds restrictions, your app's content, like the clouds you see here, can have objects that peek outside the bounds of the window or its volume, which helps your content appear more immersive.

    Changing gears now: gestures and object manipulation are easier to adopt. It's always important for interactions with virtual content to feel natural and mimic the real world, and objects can now be manipulated with simple hand motions: reoriented with one or both hands, scaled by pinching and dragging with both hands, and you can even pass an object from one hand to the other. It's all built in, so there's no need for you to implement a complicated set of gesture handling.

    You can apply this behavior to objects in your app with SwiftUI or RealityKit, depending on whether the object is a custom view or a RealityKit entity: use the manipulable view modifier in SwiftUI, or add the ManipulationComponent in RealityKit. And one more: if you're using a Quick Look 3D view, it's already built in. You just get it for free.
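
    As a rough sketch, adopting this might look like the following. The modifier and component names are the ones mentioned above; the exact initializers and configuration options are assumptions.

        import SwiftUI
        import RealityKit

        // SwiftUI: a custom view's model opts into system-provided manipulation.
        struct MugView: View {
            var body: some View {
                Model3D(named: "Mug")
                    .manipulable()   // assumption: the default configuration is sufficient
            }
        }

        // RealityKit: the same behavior on an entity.
        func enableManipulation(on entity: Entity) {
            // Assumption: attaching the component with default settings is enough;
            // the shipping initializer may take options (allowed gestures, etc.).
            entity.components.set(ManipulationComponent())
        }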

    Speaking of RealityKit, there's now a much deeper alignment between SwiftUI, RealityKit, and ARKit, with a ton of improvements that simplify interacting with RealityKit from your SwiftUI code. For instance, RealityKit entities and their animations are now observable, which makes it much easier to react to changes in your SwiftUI views. There's an improved coordinate conversion API. You can write gesture handlers in SwiftUI and attach those gestures directly to RealityKit entities. And Model3D can do a lot more now: it can play animations, load USD variants, and switch between different configurations in your models.
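
    For example, a SwiftUI gesture handler targeted at a RealityKit entity can look roughly like this. This sketch uses the targeted-gesture pattern with a placeholder box entity; visionOS 26 also adds more direct ways to attach gestures to entities that aren't shown here.

        import SwiftUI
        import RealityKit

        struct TapToSpin: View {
            @State private var box = ModelEntity(
                mesh: .generateBox(size: 0.2),
                materials: [SimpleMaterial(color: .cyan, isMetallic: false)]
            )

            var body: some View {
                RealityView { content in
                    // The entity needs input and collision components to receive gestures.
                    box.components.set(InputTargetComponent())
                    box.generateCollisionShapes(recursive: true)
                    content.add(box)
                }
                // A SwiftUI gesture handler aimed at the RealityKit entity.
                .gesture(
                    TapGesture()
                        .targetedToEntity(box)
                        .onEnded { _ in
                            box.transform.rotation *= simd_quatf(angle: .pi / 4, axis: [0, 1, 0])
                        }
                )
            }
        }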

    By the way, the example I've been using to show you these new APIs is available for you to download. It's called Canyon Crosser, and it's on developer.apple.com.

    All right, continuing on our agenda to system features. Just a moment ago, Shashank told you about many of the advancements in Apple Intelligence and machine learning across all of our operating systems, and those capabilities are an integral part of visionOS as well.

    One of the most important changes for machine learning in visionOS is that we now provide API access to the Neural Engine without needing any special entitlement, and that's a big deal. It lets you run your own models right on the device through the Neural Engine, rather than only on the CPU or the GPU.

    Along with that, we've also brought the new Foundation Models framework to visionOS, so you get direct access to the large language model at the core of those Apple Intelligence features. It's the same framework as on our other platforms: you can send prompts to the LLM right on your device, create structured output with guided generation, use tool calling to let the model fetch data or perform actions you define in your code, and so on.
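
    As a short sketch of what that can look like with the Foundation Models framework, using the names presented at WWDC; exact parameter labels may differ from the shipping SDK, and the prompt content is just an example.

        import FoundationModels

        // Guided generation: the model fills in a typed value instead of free text.
        @Generable
        struct TrailSuggestion {
            @Guide(description: "A short, friendly trail name")
            var name: String
            var distanceInKilometers: Double
        }

        func planHike() async throws {
            // A session with the on-device language model.
            let session = LanguageModelSession(instructions: "You help hikers plan day trips.")

            // Plain prompt, plain-text response.
            let reply = try await session.respond(to: "Suggest a short hike near a lake.")
            print(reply.content)

            // Structured output via guided generation.
            let suggestion = try await session.respond(
                to: "Suggest one beginner-friendly trail.",
                generating: TrailSuggestion.self
            )
            print(suggestion.content.name, suggestion.content.distanceInKilometers)
        }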

    In addition, we've introduced a new speech-to-text API for iOS and macOS, and also visionOS, called SpeechAnalyzer. It's already used all over the system, in Live Captions in FaceTime and audio transcriptions in the Notes app, and now you can bring it into your apps as well. Along with that, it includes the new speech-to-text model called SpeechTranscriber that Shashank mentioned. It's faster and more flexible, which makes it ideal for things like media captioning, and the whole thing runs entirely on your device.
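
    A rough sketch of the flow, based only on the description above: a SpeechTranscriber module is attached to a SpeechAnalyzer, and results arrive as an async sequence. The initializer labels, option values, and the analyzeSequence(from:) call are assumptions and may not match the shipping API.

        import Speech
        import AVFoundation

        func transcribe(file: AVAudioFile) async throws {
            // Assumption: these options (volatile results for live, caption-style
            // updates) are illustrative; check the SpeechAnalyzer documentation.
            let transcriber = SpeechTranscriber(
                locale: Locale(identifier: "en_US"),
                transcriptionOptions: [],
                reportingOptions: [.volatileResults],
                attributeOptions: []
            )
            let analyzer = SpeechAnalyzer(modules: [transcriber])

            // Assumption: feed an entire audio file for analysis in the background.
            Task { _ = try await analyzer.analyzeSequence(from: file) }

            for try await result in transcriber.results {
                print(result.text)   // on-device transcription, suitable for captions
            }
        }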

    OK, moving on, let's talk about Widgets. Widgets are lightweight app extensions that offer information at a glance, like a weather forecast or a calendar event. They're all about providing people with glanceable, relevant, personalized information, surfacing contextual information without needing to open an app. And as a spatial platform, visionOS enables Widgets to take on a whole new form: they become three-dimensional objects that feel right at home in your surroundings, and many aspects of their appearance can be customized to make them fit right into your space.

    Widgets can be placed on horizontal or vertical surfaces, like a wall, a shelf, a desk, or a countertop, and they remain anchored to that location and persist across sessions. So if you leave the room and come back later, or take the device off and put it on later, they'll still be there. They just become a part of your space.

    Widgets have multiple size templates to choose from, but on visionOS those sizes take on real-world dimensions, giving them a very physical presence in the room with you. So if you're developing Widgets, think about where the widget might live: will it be mounted on a wall, sitting on someone's desk, and so on, and then choose a size that will feel right for that context.

    Another feature unique to visionOS is proximity awareness, which means Widgets can adapt their appearance and layout based on how far away someone is. For example, in this sports widget, as we move closer, more detailed information becomes visible.
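
    A sketch of how a widget view might adapt: the levelOfDetail environment value and its cases are as recalled from the widgets sessions and should be treated as assumptions, and ScoreboardDetail is a hypothetical view.

        import SwiftUI
        import WidgetKit

        struct ScoreWidgetView: View {
            // Assumption: reflects how close the viewer is to the widget on visionOS.
            @Environment(\.levelOfDetail) private var levelOfDetail

            var body: some View {
                switch levelOfDetail {
                case .simplified:
                    Text("Giants 3 – 2")      // glanceable from across the room
                default:
                    ScoreboardDetail()         // hypothetical richer view, shown up close
                }
            }
        }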

    By the way, good news: if your iPhone or iPad app already includes Widgets, you're off to a great start, because if your app runs in compatibility mode, your Widgets will carry over to visionOS and automatically adapt to the new spatial qualities.

    All right, those are a couple of the new system features in visionOS, and now I want to move on to interactions. As you know, hands and eyes are the primary input methods for Vision Pro, and we've built visionOS so that you can navigate its interfaces entirely based on where you're looking and with intuitive hand motions. And by the way, we've improved hand tracking in visionOS 26 to be as much as 3 times faster than before, now running at 90 hertz, which can make your apps and games feel even more responsive. There's no code needed for this; it's just built in.

    We've also added a new way to navigate web content and content in your apps using just your eyes. We call it look to scroll. It's a nice, lightweight interaction that works right alongside scrolling with your hands, and you can adopt it in your apps as well, with APIs in SwiftUI and UIKit.
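
    Opting in is a small change. The modifier shown here is as recalled from the SwiftUI sessions, so treat the exact name and option as an assumption, and the text is a placeholder.

        import SwiftUI

        struct ArticleView: View {
            var body: some View {
                ScrollView {
                    Text("Long-form article body goes here…")   // placeholder content
                        .padding()
                }
                // Assumption: opts this scroll view into look-to-scroll alongside
                // the usual hand-based scrolling.
                .scrollInputBehavior(.enabled, for: .look)
            }
        }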

    visionOS also supports Bluetooth keyboards and trackpads, in case that kind of input is right for your use case, and game controllers as well. Maybe you need joysticks, buttons, and a D-pad, or maybe you're bringing a game from another platform and have already designed your gameplay around controller input.

    Now in visionOS 26 we've added another option: spatial accessories. They give people finer control, with tracking in six degrees of freedom, tactile buttons, and haptic feedback, and we've added support for two new spatial accessories via the Game Controller framework. The first, which you see here, is the PlayStation VR2 Sense controller from Sony. These are great for high-performance gaming and other fully immersive experiences. It has buttons, joysticks, and a trigger, but most importantly it tracks its world-space position and orientation in six degrees of freedom.

    The other new accessory is the Logitech Muse, which is great for precision tasks like drawing and sculpting. It has four sensors that allow for variable input on the tip and the side button, as well as haptic feedback. Here's an example: this is Spatial Analog from Spatial Inc. It's a collaborative design tool that works with USD models and other 3D assets, and they're using the Muse here to annotate the dimensions of this virtual chair.

    Something they're taking advantage of is the richness of the data coming from the accessory. Position and rotation are tracked using a combination of the onboard cameras on Vision Pro and sensors built into the accessories, and ARKit also gives you access to additional data, like which hand is holding the accessory, its velocity as it moves in world space, and its rotational movement.

    A spatial accessory can have haptics as well, which is always a great feedback mechanism; it makes the interactions feel very realistic.

    Something else to call your attention to: it's possible to anchor virtual content to the accessories themselves using AnchorEntity in RealityKit. In a game, for example, a game piece could be anchored to a controller's grip while someone moves it from place to place, and here is an example of exactly that. This is PicklePro from Resolution Games. They're playing pickleball, and they've anchored a virtual paddle right onto the grip of the controller, and the ball too. It works incredibly well, and the positional tracking is very, very precise.

    OK, so that's spatial accessories. You can support spatial accessories to give people more fine-grained control over their input, and it's great if you can provide haptic feedback so players and other people can feel these virtual interactions. All of this is supported through the Game Controller framework along with RealityKit and ARKit.

    Next, let's talk about immersive media, and I want to start with photos. visionOS can now present photos and other 2D images in a whole new way: we call them spatial scenes. Spatial scenes are 3D images with real depth, generated from a 2D image like this one was. It's sort of a diorama version of the photo, with motion parallax that accentuates the scene's depth as the viewer moves their head relative to it.

    Here's a real-world example of that from Zillow. These are just regular photos taken by realtors, and it shows how this effect can be applied to any image: photographs, images from the web, old Polaroids, whatever you've got. It really gives the sensation of these images having stereoscopic depth. So that's spatial scenes.

    Now turning to video: Vision Pro supports a comprehensive spectrum of media formats, including 2D video, 3D stereoscopic movies, spatial videos, and Apple Immersive Video, and with visionOS 26 we've added support for three additional media types: 180, 360, and wide field-of-view video. That gives you a comprehensive suite of options for immersive media, but it also means that you, as developers, have many different formats and kinds of experiences to consider for your apps.

    To support that, and to support you, we've introduced a new QuickTime movie profile called Apple Projected Media Profile, or APMP. APMP uses metadata in your video files to identify what kind of media they represent and then to indicate a projection of that content to the video player. Let me show you an example to illustrate the concept. Over here on the left is some wide-FOV media captured with an action camera.

    You can see, just from the still frame, that it shows some curvature due to the fisheye lens that gave the wider field of view. Now let me set these two in motion.

    Over on the right, APMP enables our media frameworks to project that video onto a curved surface, with the viewer's eyes placed right at the center of the action. And since the curve matches the camera's lens profile, it essentially undoes the fisheye effect. So inside your Vision Pro, you as a viewer will see straight lines as straight, even over on the sides of the image.

    APMP has a ton of utility, and to support it we've added automatic conversion to APMP for a number of cameras that capture 180, 360, and wide-FOV video: cameras from Canon, the GoPros, the Insta360s, and so on.

    There's also a ton of existing 180 and 360 content out there, so we've updated the avconvert command-line tool on macOS to support converting 180 and 360 content to APMP.

    I want to circle back for a moment to mention Apple Immersive Video. Apple Immersive Video is a format designed for Apple Vision Pro that uses 180-degree, 3D, 8K video with Spatial Audio to create a deeply immersive viewing experience. And now, in visionOS 26, we're opening up Apple Immersive Video to developers and content creators.

    Here's what that pipeline looks like for a content creator now: you can capture Apple Immersive Video using Blackmagic's URSA Cine Immersive camera, which is an amazing camera, let me say, then do your editing and grading in DaVinci Resolve. You can preview and validate using the new Apple Immersive Video Utility, which is up on the App Store. From there it depends on your destination, but let's say you're streaming: you can create segments in Compressor, for example, for distribution via HTTP Live Streaming.

    For developers of pro apps who want to create their own tools to work with Apple Immersive Video, there's a new framework on macOS and visionOS called Immersive Media Support. It lets you read and write Apple Immersive content programmatically, and we have some great sample code called Authoring Apple Immersive Video that can help you get started.

    Hopefully all of this leads to playback, perhaps in your own apps. Apple Immersive Video can now be played in visionOS 26 by all our media playback frameworks, including RealityKit, AVKit, Quick Look, and WebKit, so you can integrate it into whatever kind of experience you may be building.
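
    Playback itself stays ordinary AVFoundation and RealityKit code; per the session, the frameworks take care of the projection. Here's a minimal sketch with a placeholder HLS URL.

        import SwiftUI
        import AVFoundation
        import RealityKit

        struct ImmersivePlayerView: View {
            // Placeholder URL; point this at your own HLS stream.
            let player = AVPlayer(url: URL(string: "https://example.com/immersive/master.m3u8")!)

            var body: some View {
                RealityView { content in
                    let screen = Entity()
                    // VideoPlayerComponent renders the player's output in RealityKit.
                    screen.components.set(VideoPlayerComponent(avPlayer: player))
                    content.add(screen)
                }
                .onAppear { player.play() }
            }
        }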

    All right, so that's immersive media: starting with spatial scenes, which give 2D photos real depth and parallax; then the new APMP profile and its automatic conversion tools; and, for creators and pro app developers, Apple Immersive Video, to deliver the ultimate cinematic experience to your audiences.

    Next up, let's dive into what's new in sharing and collaboration. One of the cornerstones of spatial computing is the ability to connect people, and one of the ways we do that is with Spatial Personas. Spatial Personas are your authentic spatial representation while you're wearing a Vision Pro, so other people can see your facial expressions and hand movements in real time. And now in visionOS 26, Spatial Personas are out of beta, with a number of improvements to things like hair, complexion, expressions, representation, and a ton more.

    Also in visionOS 26, we've extended SharePlay and FaceTime with a new capability called nearby window sharing, which lets people who are in the same physical space share an application and interact with it together. This app is Defenderella, by Rock Paper Reality. It's a multiplayer tower defense game that comes to life right there in your space, and it's super fun to play with other people nearby.

    All of this starts with a new way to share apps. Every window now has a share button next to the window bar that opens the share window; giving that a tap shows you the people nearby, and you can easily start sharing with them.

    When someone starts sharing, the system makes sure your app appears in the same place, at the same size, for everyone there. And since the app is right there with all of you in your space, you can have a conversation, point at things, and interact with the app as if it were a physical object in your room; people can even hand content to each other.

    Anyone involved can interact with the shared window: move it around, resize it, even snap it to a wall or another surface in the shared surroundings. If someone holds the Digital Crown to recenter, the app moves back into a good place for everyone. And if somebody pointing at something ends up occluding the virtual window, the content fades out to make sure that person stays visible to the others.

    You can even let people place content while maintaining their shared context. This is done with something called shared world anchors: for example, if you had an app where you're putting virtual furniture in the room, it will show up in the same place for all of the participants nearby.

    This shared context isn't just for people in the same room: you can also invite remote participants via FaceTime, and if they're using a Vision Pro, they'll appear as Spatial Personas with you.

    Another thing to mention: since Apple Vision Pro is part of the overall Apple ecosystem, it's very easy to build collaborative experiences across devices. Let me show you an example. This is Demeo, a cooperative dungeon crawler from Resolution Games, and it's really fun to play with a group. You can see one of the players here is having an immersive experience using a Vision Pro, and her friends have joined in on the fun from an iPad and a Mac.

    A very cool experience. So that's a quick update on SharePlay and nearby window sharing. When you're bringing SharePlay to your app, a few tips: first, design the experience for both nearby and remote participants; also, build your app with Spatial Personas in mind; and think about experiences that let people connect across Apple platforms.
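
    To ground those tips, here's a minimal GroupActivities sketch. The activity name and message handling are placeholders; nearby window sharing and FaceTime SharePlay both drive sessions of an activity like this one, per the session above.

        import GroupActivities

        struct TowerDefenseMatch: GroupActivity {
            var metadata: GroupActivityMetadata {
                var data = GroupActivityMetadata()
                data.title = "Tower Defense Match"
                data.type = .generic
                return data
            }
        }

        func startOrJoinMatch() async {
            // Offer the activity; the system UI handles who joins, nearby or remote.
            _ = try? await TowerDefenseMatch().activate()

            // Configure each incoming session and join it.
            for await session in TowerDefenseMatch.sessions() {
                let messenger = GroupSessionMessenger(session: session)
                session.join()
                _ = messenger   // exchange game or app state here
            }
        }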

    All right, and then my final section. Last year we introduced the first set of enterprise APIs for visionOS: things like access to the main camera, barcode scanning, extra performance headroom, and so on. Since then, that team has been working really hard to bring even more capabilities to enterprise developers, so here's some of what's new in visionOS 26 for those of you developing enterprise apps. One thing to mention: because these features offer much deeper device access, access to the APIs is managed through an entitlement and a license tied to your developer account. OK, let's go through them, starting with greater camera access. Previously, an entitled enterprise app could access the video feed of the left main camera, the one for your left eye. Now we've expanded that access to the left or right cameras individually, or to both for stereo processing and analysis. If you're already familiar with this, it's the camera frame provider API in ARKit. And by the way, earlier I mentioned that apps now have access to the Neural Engine, so you can imagine putting all of this together and running the camera feed through your own custom machine learning model running on the Neural Engine.
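
    For reference, a rough sketch of that camera pipeline using ARKit's camera frame provider. It requires the enterprise entitlement, the format-selection and per-frame calls shown are assumptions that may not match the API exactly, and the process function is a hypothetical hook for your own model.

        import ARKit
        import CoreVideo

        func streamMainCamera() async throws {
            let provider = CameraFrameProvider()
            let session = ARKitSession()
            try await session.run([provider])   // requires the enterprise camera entitlement

            // Assumption: request the left main-camera format; visionOS 26 adds
            // right and stereo positions as described above.
            guard let format = CameraVideoFormat.supportedVideoFormats(
                      for: .main, cameraPositions: [.left]).first,
                  let updates = provider.cameraFrameUpdates(for: format) else { return }

            for await frame in updates {
                if let sample = frame.sample(for: .left) {
                    process(sample.pixelBuffer)   // e.g. feed a custom model on the Neural Engine
                }
            }
        }

        // Hypothetical hook for your own processing.
        func process(_ pixelBuffer: CVPixelBuffer) { /* run your model here */ }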

    Also new in visionOS 26 is a feature that lets people select a specific area of their real-world view and get a dedicated video feed of that area in its own window. It's a new SwiftUI view called the camera region view. Here's an example: I position a window over an area down to the left that I want to capture, over that pressure gauge, and now I can keep an eye on it while I'm doing other work, or even share this video feed with a remote participant.

    Speaking of sharing: a moment ago I covered nearby experiences with SharePlay, but some enterprises aren't currently using FaceTime or SharePlay, maybe because they have custom networking or other requirements. For them, we've introduced new API in ARKit for establishing a shared coordinate space with people in the same physical space, called the shared coordinate space provider. It makes it possible for multiple participants to align their coordinate systems and exchange collaboration data and world anchors over their local network. It's a great way to collaborate on projects.

    Next is window follow mode. Sometimes when you're working on a task you need to move around, perhaps while keeping an eye on a dashboard or referring to a set of instructions, repair instructions, training instructions, whatever the case may be.

    Follow mode makes this possible by letting people choose a window that stays with them as they move from place to place. You don't have to be walking when you do this, either; just moving around, it will stay right with you.

    The last item is all about data privacy. A lot of enterprise apps handle sensitive information, like financial data, patient records, or other proprietary content, and you wouldn't want that information recorded or visible if someone began screen sharing. Now apps can hide a view's contents from screenshots, screen mirroring, and the like using the contentCaptureProtected view modifier in SwiftUI. The content remains perfectly visible to the person wearing the device, but it's restricted from screen captures, recordings, mirrored views, and other shared views.
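
    In SwiftUI that's a one-line adoption. RecordDetails here is a hypothetical view standing in for your sensitive content.

        import SwiftUI

        struct PatientRecordView: View {
            var body: some View {
                RecordDetails()   // hypothetical view with sensitive content
                    // Visible to the wearer, but excluded from screenshots,
                    // recordings, and mirrored or shared views.
                    .contentCaptureProtected()
            }
        }

        struct RecordDetails: View {
            var body: some View { Text("Patient: …") }
        }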

    All right, all of these features join our suite of enterprise APIs, letting you create even more powerful enterprise solutions and spatial experiences for your in-house apps.

    And that brings me to the end. These are some of the new capabilities coming with visionOS 26. To repeat, I've covered a lot here, but only at a high level, and there are many, many more features and APIs for you to explore. A lot of them are in the session videos we published from WWDC, so please check those out. This year we've also published a lot of new visionOS sample code, all available for you to download on developer.apple.com. All right, and with that, back to Leah. Thank you.

    Thank you.

    Wow, there are just so many cool updates to visionOS. Well, thank you all for taking the time to join us and recap the biggest updates from WWDC. We covered a lot today. We started out with an overview of the new design system and how to use Liquid Glass thoughtfully in your apps so they feel right at home on iOS 26 and macOS Tahoe. We also shared how building with native UI frameworks like SwiftUI, UIKit, and AppKit makes it easier to take advantage of the new design, bringing that clean interface and consistency across platforms.

    Then we dove into the latest with Apple Intelligence. You can get started with AI-powered features like Writing Tools or Image Playground, tap into the on-device foundation models, expand the reach of your app with App Intents, or even leverage the power of Apple silicon with your own models.

    And finally we explored visionOS 26. New volumetric features in SwiftUI make apps feel even more immersive, and there are new ways to build collaborative experiences using SharePlay, immersive media formats to create amazing viewing experiences, and ways to control Apple Vision Pro with spatial accessories. So as I mentioned earlier today, these topics are just the tip of the iceberg. Whether you want to go deeper on some of the topics discussed today or explore others that we didn't have the chance to cover, the Apple developer website has you covered with plenty of documentation, videos, and sample code.

    We're also going to send out an email later today with links to resources about everything we discussed, so you can learn more and dive deeper into these topics. The Worldwide Developer Relations team is excited to engage with all of you over the summer and fall, and we're excited to help you adopt these new systems and technologies in your apps. For details on upcoming opportunities to meet with us, check out the Meet with Apple and Hello Developer newsletters and visit us in the developer app and on the website. To those online, thank you so much for joining in today; we hope we get the chance to meet you again, whether here in a developer center or online. And for everyone here with us in Cupertino, please join us in the lobby for some refreshments and conversation with Apple engineers and designers. Thank you all for joining, and I look forward to what you build next.
