Platforms State of the Union
Take a deeper dive into the new tools, technologies, and advances across Apple platforms that will help you create even better apps.
Welcome to the WWDC 2021 Platforms State of the Union. WWDC is a time when we come together as a developer community to look at what the near future holds for our platforms. I also want to reflect a bit on how the work you did helped all of us get through this past year. Your apps and your creativity have enabled people to find new ways to keep things moving under extraordinary circumstances. There are so many great examples of developers making a difference, and we know there's much more we can do to help you make the world better, faster. So this year we're delivering tools, technologies, and APIs designed to help you accomplish more. And we'll be talking today about three big areas where we're making that happen. First, we'll talk about the things that help you build great apps: Xcode and Xcode Cloud, Swift, and our UI frameworks. Then we'll look at how Apple technologies like augmented reality and our graphics stack can help you build apps that let users see the world in new ways. Finally, with new features like Focus, the Screen Time API, Widgets, and SharePlay, we'll see ways in which the apps you build can help users better connect with each other and prioritize what's most important to them. Let's start with developing apps. To tell you more, here are Alison, Rhonda, and Andrew.
Developing an app today is a pretty sophisticated process, and you rely on your tools to keep you focused and effective. The most essential part of the process is coding, but building a quality app today involves a number of specialized steps and tools. You need to test your code across various configurations. Your team reviews your code, and you integrate changes into a shared repository. You deliver to beta testers. And based on their feedback, you constantly refine your app. All of these steps are important, and it's important to get each of them right. You often have to jump between different apps, websites, and services to get everything done. All of this context switching disrupts your focus and pulls you away from your code. It's time to do something about that, to remove the friction and bring everything together, so you and your team can focus on creating great experiences. That's why we created Xcode Cloud: a new continuous integration and delivery service built right into Xcode and hosted in the cloud. It helps you manage every stage of your development process, and makes it easy to get the important things right. Xcode Cloud was designed and built from the ground up to support development for all Apple platforms. It's deeply integrated into Xcode, saving you time by keeping you focused in one place. It leverages Apple's cloud infrastructure to offload your builds, tests, and even code signing for distribution. It integrates with Apple services like TestFlight and App Store Connect, as well as every major git-based source control provider. It even has REST APIs to help connect to other aspects of your development process. And it's built with advanced security to protect you and your projects. This is the biggest investment we've made in our developer tools since the original release of Xcode, and all of it is inside the experience you already know. You create and manage Xcode Cloud workflows in Xcode 13, letting you stay in your code while test suites, code signing, and TestFlight distribution are handled for you. And when Xcode Cloud finishes a build, your results are right inside Xcode. This is going to change the way you work. It's already changing the way we work. Many teams at Apple have incorporated Xcode Cloud into their development process, including the team behind Xcode itself.
It's incredibly easy to get started with Xcode Cloud. It only takes four steps: selecting the product, confirming your workflow, granting access to your source code, and linking with App Store Connect. Let's walk through the process with Fruta, a SwiftUI sample app. Xcode Cloud automatically detects the products and platform for my project, so I'll click next. Then I'll review the suggested workflow, which tells Xcode Cloud what to do and when to do it. The default actions build every change I make, which is exactly what I want. Now, Xcode Cloud will securely connect to the hosted account for my source code. I've already granted access using my credentials, so I can just move on. Finally, Xcode Cloud recognizes that Fruta already exists on the App Store and asks me to confirm the information. If your app isn't registered yet, Xcode Cloud will do it for you. I'll click Complete and start my first build in the Cloud. When the build is finished, I'll be able to view the results in the report navigator. And just like that, I set up continuous integration and delivery for my app in one minute all from within Xcode. Let's take a closer look at how results are presented. Under the Cloud tab in the report navigator, builds that have been run for each workflow will be grouped by branch or Pull Request. Selecting an individual build brings up its overview with information like how and when it was started, which Xcode and macOS versions were used, and the status of all the actions. You can even check out the source or initiate a rebuild. While the default workflow is great for getting started, Xcode Cloud gives you even more power to accomplish goals like analyzing an app or deploying new releases to Test Flight or the App Store. My team wants to run our iOS tests on every new Pull Request, so let's set that workflow up now. I'll go back to the Xcode Cloud product menu, selecting Manage Workflows this time, and I'll click Plus to add a new one. I'll name the workflow Pull Requests, then edit its start conditions to run on every Pull Request that targets the main branch. I want our tests to run on the public beta versions of Xcode and macOS, so I'll set that here. Next, I'll look at the workflow's actions, where I'll add a test action, then select an existing iOS test plan from the project. To get broad testing coverage for my app, Xcode Cloud recommends simulators for me to use. With just two clicks, I get a curated set of iPhones and iPads for my workflow. Now that's pretty neat. Our team also needs to be notified when a build succeeds or fails. So I'll add a Notify post action and add our team's Slack channel. By clicking Save, our workflow has been added to our product on Xcode Cloud. Now, my team will have added confidence in the changes we're making. There's so much more you can do with workflows, including running custom build scripts, and using Xcode Cloud’s web hooks and APIs to integrate with other systems you and your team depend on. And the workflow management and build reports you just saw in Xcode are also available in App Store Connect on the web. This makes it easy for you or other members of your team to use Xcode Cloud from anywhere. Now that you’ve seen the basics of working with Xcode Cloud, let’s see how it helps you in each specific area of your development cycle. Writing good tests and running them repeatedly is critical to creating a quality application. With Xcode Cloud, you’ll be testing your code more thoroughly, more consistently, and more efficiently. 
You can configure your Xcode Cloud workflows to run multiple test plans across multiple platforms, device simulators, and OS versions all in parallel. You can also run your tests in Xcode Cloud on beta OS releases before you even download the betas to your own machine. So Xcode Cloud will help you test more and Xcode 13 will help you test better.
Our app Fruta supports Light and Dark appearances, portrait and landscape orientations, and is localized into two languages. I've been working on a suite of user experience tests that exercise Fruta's most popular features. Here in my test code, I'll adopt this simple XCTest API to make these tests go even further by automatically running each test in each variation. Let's look at that test coverage in Xcode Cloud by selecting the most recent build and the workflow test action I have configured. Results are displayed in the familiar Xcode test report. These tests ran across a set of recommended iPad simulators running iOS 15, once per configuration, capturing screenshots all along the way. Xcode 13 has a brilliant new way to review those results. From the editor options menu, I'll enable the new gallery view. The screenshots from my tests are displayed in every variation, with the images from each test presented together. I can zoom out even further to see all images, and when I find one I'm really interested in, I can see it at full resolution using Quick Look. The gallery view makes it effortless to confirm our app is looking fantastic across all conditions, languages, and layouts. Over in my unit tests, I have a test failure that I'm sure you will relate to. Sometimes the test passes and sometimes it fails. We've all been here before. And Xcode 13 is here to help. I navigate to the test source, click on the test gem, and choose Run Test Repeatedly. Let's get a better sense of reliability by running it 100 times. If I wanted to do this before, I'd have to run the test many, many times myself. Now, I can sit back as the tools do all of the work. As I suspected, Xcode is showing this test is very unreliable. There must be a problem in my code. But until I can fix it, I'll adopt the new expected failure API and include a message about reliability for the rest of my team to see. To make sure things are just as I expect, I'll make use of the Test Again feature, available from the product menu. Xcode remembers what it did last time so it's really easy. My test is still raising assertions, but it's not failing anymore. And I have a gentle reminder to fix it down the road. That is exactly what I need. As you can see, Xcode 13 and Xcode Cloud help you find and address issues in your app, or your tests, faster than ever. Tests are one form of insight into your code. Another is input from your peers through code reviews and Pull Requests. To keep you focused on your code, Xcode 13 brings these discussions with your team directly into the editor. I've created a Pull Request from the feature branch I've been working on. My feature allows users to favorite the most delicious smoothies. In the navigator on the left, you can see the new source control changes tab. It shows all the files I've modified locally, my Pull Request, and the changes that are included. When I select the Pull Request, I get a full overview of all the activity and the conversation going on. And as I scroll, I see my description and the interesting events over time, as well as any code feedback from my team and new commits that I make. But we're using Xcode Cloud, and our Pull Request workflow is building and testing every commit that I make. At the top, I get a live status from all of my workflows.
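For reference, the expected failure annotation described in the testing demo above boils down to a single XCTest call. This is a minimal sketch; the test name, message, and assertion are placeholders.

```swift
import XCTest

final class SmoothieListTests: XCTestCase {
    func testFavoriteSmoothie() {
        // Mark the whole test as an expected failure so it stops failing CI,
        // while leaving a reminder for the team to fix the underlying bug.
        XCTExpectFailure("Favoriting is unreliable until the sign-in race is fixed.")

        // ...exercise the flaky behavior here...
        XCTAssertTrue(false, "Replace with the real assertion that currently flakes.")
    }
}
```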
Rhonda has a few suggestions to make my code even better. When I navigate to the source code, I see comments from Rhonda in my editor. This code requires the user to be logged in. So it needs to be reworked to allow signing in before favoriting a recipe. I'll reply to let Rhonda know that I'm working on a change and also give her a heads-up: This might crash in the build if she doesn't sign in first.
Beyond Pull Requests, Xcode 13 makes reviewing local changes really easy too. As I navigate to a file that I'm working on, Xcode automatically displays the diff of my changes against the latest revision in this beautiful new in-line presentation.
I can use the updated revision selectors to compare my local changes against any branch or tag in history. Best of all, I can use code review in any editor, even when I'm using multiple editor splits across different files in my window. And now with two options-- in-line comparison and side by side-- I get to pick the perfect presentation for what I'm working on. With one last code change and comment, it could not be easier to review and respond to insight from my team.
Part of delivering a great experience is getting your app into the hands of your team and beta testers. Xcode Cloud makes that process effortless. Xcode 13 now uses the cloud to securely obtain and manage everything you need to code sign your apps. This means you no longer need to worry about keeping your certificates and profiles up to date on your Mac. The archive action in your Xcode Cloud workflow uses the same system to sign your app for distribution. And by adding a post-action to your Xcode Cloud workflow, you get automatic delivery of betas through TestFlight to all Apple platforms, including macOS with the new TestFlight for Mac. Once you've delivered your latest build, you'll get even more insight from your beta testers. Xcode 13 includes major improvements to better connect you with the same diagnostics and feedback found in App Store Connect. Crash logs from TestFlight apps are now delivered directly to the Organizer within minutes. And the Organizer now shows the written feedback a user attaches to a crash report. This gives you valuable context when analyzing the crash and a broader view into your app's usage. After Andrew's test passed, Xcode Cloud submitted the build directly to TestFlight.
I just received a notification on my phone for a new iOS build of Fruta, and there's the Mac version from TestFlight for Mac.
Since Fruta is a multiplatform project, I'm getting this new build in both places at the same time. I'm really excited to install this build and see how the new feature feels. So I'll do that now. I suspect there are still a few rough edges. Maybe the app will crash if I try to favorite this smoothie. And sure enough, it does. I can use TestFlight's crash feedback UI to let Andrew know. I'll explain what I was doing when the app crashed, and he'll be able to fix the issue and add a test to ensure this is caught earlier next time.
I've been looking at our most recent app releases in the Organizer. When I filter to the last day, here's a crash Rhonda experienced just a moment ago, fully symbolicated and ready for investigation. The new TestFlight feedback inspector includes her comments, information on the app build, version, and her device. And I can even contact her to learn more about her experience. What's even better: Xcode knows where in my code this crash came from. So with one click, I can open it in my project. The debug navigator has the full backtrace. My source editor highlights the assertion and my Pull Request conversation displays too. It's incredibly exciting to have what I need to fix this problem right here in Xcode.
We've brought everything you need into the tools you use every day-- test results, comments from peers, and user feedback-- all to give you greater insight and help you deliver the next great version of your app. Xcode Cloud was built with your privacy and security in mind. Your data-- including your source, access tokens, signing keys, and build artifacts-- is handled securely. And we use the least amount of data possible to run the service. This is a huge year for our developer tools. With Xcode 13 and Xcode Cloud, you'll be building and delivering quality apps across all of Apple's platforms in less time and with less effort than ever. Xcode Cloud will initially be available as a free, limited beta. Developer Program account holders can sign up right now at developer.apple.com. We will gradually add more teams as we work towards making this available to all developers next year. We'll provide more details on pricing and availability this fall. You can check your registration status from inside Xcode 13 or the Xcode Cloud tab in App Store Connect.
In addition to everything you've seen here, we have a huge list of improvements and features in our developer tools that you can learn about in this year's sessions, including some terrific enhancements to Swift support in Xcode.
That's just the start of an exciting story for Swift this year. To tell you more, here are Josh, Holly, and Matt.
Swift has become a critical language for developers across Apple's platforms and beyond. It's enabled our most modern technologies, serving as the foundation for a new generation of frameworks like SwiftUI, CreateML, and the new StoreKit 2. It provides a modern, type-safe language to craft your most complex apps with powerful tools like Xcode Previews and Swift Package Manager to accelerate your development. And it's friendly and approachable for newcomers, with engaging content and lessons available in Swift Playgrounds to learn how to code. Now, a key part of ensuring that a technology is great for you is adopting it ourselves. High-profile apps like Music have been written in Swift for years now, and system-wide features like Widgets have been designed from the ground up with SwiftUI. Learning Swift and SwiftUI gives you a common, powerful set of tools and APIs to build fully native apps for all of our platforms. And because Swift itself is open source, we've been able to work together with many of you to deliver tons of new features and capabilities over the last few years. Now, one of those capabilities, which is crucial to building any app, is support for concurrency. And here's Holly to tell you all about it.
Whether you think about it or not, you are writing concurrent code today. Concurrency enables your apps to perform multiple tasks at the same time, which helps your apps stay responsive to user input while doing work in the background, like a weather app fetching forecast data while the user selects a city. And it's essential to taking advantage of multicore processors to achieve high performance for heavy computation, like rendering complex visual effects in a video app. But without language support, writing concurrent code is really hard to get right. So we're bringing first-class support for concurrency to Swift. Our approach to building concurrency into the language follows the same core principles of Swift itself, making it easier to write modern, safe, and fast code that eliminates entire classes of programming mistakes. First, let's talk about how we've taken a modern approach to building concurrency into Swift. Today, we think of modern code as structured and easy to express what you want to do. Unfortunately, most of today's asynchronous code uses completion handlers that are unstructured and hard to express. To make expressing asynchronous functions easier, we've built the modern async/await pattern into Swift. Now you can mark an asynchronous function with the async keyword.
When the function is called, you use the await keyword to indicate that other work can be done while the caller waits for the result of the async function. To understand the improvements async/await brings over completion handlers, let's walk through an example. When I'm not working on the Swift compiler, I like to dance. To prepare for a show, a dance company must first warm up, the crew will fetch the scenery and props from storage, and then the stage is set. Once all of that is done, the dancers can move into their opening positions. Here is an asynchronous implementation of "prepareForShow" that uses completion handlers. What this code is trying to accomplish is really simple, but the code is convoluted. It uses nested completion handlers that make the flow of execution unnatural, so the code is really hard to read. Adopting async/await in this example leaves us with code that's now in a straight line. This code is so much easier to understand. The control flow goes from top to bottom, like any other function. You handle errors and return values in the same way as you're used to in Swift. You can use all of the normal control flow constructs, too. So it's easy to add conditional logic, so the function behaves differently during a rehearsal. Async/await makes writing asynchronous code easier by leveraging the tools you already know. It's also easier to introduce concurrency where you need it using Structured Concurrency. Structured Concurrency is a way of organizing concurrent tasks to make them easier to reason about. Let's introduce concurrency into prepareForShow. Right now, the function waits until the dancers finish their warm-up before starting to fetch the scenery, but these tasks could be done in parallel. With Structured Concurrency, you can easily create concurrent child tasks using async/await with local variables, like this. Now, the code uses 'async let' variables to create child tasks that execute concurrently with the parent. So, the company warm up and fetching the scenery will run concurrently with the rest of prepareForShow. When we need the results of those child tasks, we await the results. Because fetchStageScenery executes concurrently, it's possible that the result isn't ready yet when prepareForShow needs to use it, so accessing the result must be done asynchronously.
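To make the walkthrough concrete, here is a rough sketch of the prepareForShow example described above. All of the types and functions (Dancer, Scenery, Stage, warmUpCompany, and so on) are hypothetical stand-ins, not real API.

```swift
// Hypothetical async operations standing in for the steps described above.
struct Dancer {}
struct Scenery {}
struct Stage {}

func warmUpCompany() async -> [Dancer] { [] }
func fetchStageScenery() async -> Scenery { Scenery() }
func setStage(with scenery: Scenery) async -> Stage { Stage() }
func moveToOpeningPositions(_ dancers: [Dancer], on stage: Stage) async {}

func prepareForShow() async {
    // 'async let' creates child tasks that run concurrently with the parent.
    async let dancers = warmUpCompany()
    async let scenery = fetchStageScenery()

    // Awaiting happens only where the results are actually needed.
    let stage = await setStage(with: scenery)
    let company = await dancers
    await moveToOpeningPositions(company, on: stage)
}
```

Compared with the nested completion-handler version, the control flow reads top to bottom, and errors could be handled with ordinary throws and try if the individual steps were throwing.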
Swift's concurrency model is also designed to be safe. Just like Swift eliminates null pointer mistakes with optionals, the compiler will now help eliminate common concurrency issues by ensuring that access to shared state is safely coordinated between concurrent tasks. A core part of this safe concurrency model is built around actors. Actors are an industry-proven model for safe concurrent programming, and a powerful synchronization primitive. Conceptually, an actor is an object that protects its own state by only providing mutually exclusive access. This completely eliminates concurrent access and the low-level data races that come with it. This concept might sound familiar, because it's similar to a pattern that you might already use for classes with a dispatch queue, which was itself inspired by actors. In this pattern, the instance properties in a class are carefully accessed using a serial dispatch queue to maintain mutual exclusion. But this pattern is prone to mistakes. There's a lot of boilerplate, and it's too easy to forget to manually use the queue just once and introduce a race condition into your code. To solve these issues, we went back to the core idea of actors, and built it into Swift as a first-class construct. Now, you can declare an actor type in Swift with a simple keyword. It has the same structure as the constructs you already know, and there's no need for manual synchronization. With actors built into the Swift language, synchronizing access to actor state can be managed for you automatically. An actor can access its own properties directly, and interacting with an actor externally uses async/await to guarantee mutual exclusion. The actor concept is so powerful that it also solves another common source of concurrency problems, which is proper use of the main thread for things like UI operations. Today, you have to manually dispatch to the main queue each time you call an API that must be run on the main thread. Now, we're introducing a way to state that an API is always run on the main thread using the main actor. Making sure that an API always runs on the main actor is as easy as annotating the declaration with the MainActor attribute. Just like with other actors, calling a function that runs on the main actor is just an await away. Altogether, this means it's easier to write safe concurrent code that you don't have to manage yourself. As we build support for concurrency directly into the language, it gives us the opportunity to better optimize the performance of your concurrent code. With async/await, the compiler understands the concurrency of your code, which allows for more effective optimizations. This includes reducing reference counts and inlining, as well as addressing concurrency-specific performance issues like excessive context switches. And of course, your concurrent code will get even faster as the compiler gets smarter in the years ahead. There are tons of asynchronous APIs in the SDK that you already use in your apps. We've refined the SDK to enable async/await with these asynchronous APIs, so you can immediately adopt async/await in your existing code. And we didn't stop there. We've added new purposely crafted APIs that take advantage of async/await for when you work with URLs, when you're doing asynchronous I/O, and we even added support for asynchronously iterating line-by-line through a file.
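Here is a minimal sketch of the actor and main-actor ideas just described. The TicketOffice type and its members are invented for illustration.

```swift
// A minimal sketch of actors and @MainActor, assuming a hypothetical TicketOffice.
actor TicketOffice {
    private var seatsRemaining = 100

    // Inside the actor, state can be touched directly.
    func reserveSeat() -> Bool {
        guard seatsRemaining > 0 else { return false }
        seatsRemaining -= 1
        return true
    }
}

// Annotating a declaration with @MainActor guarantees it always runs on the main actor.
@MainActor
func updateSeatCountLabel(_ remaining: Int) {
    // Safe place to update UI.
}

func purchase(from office: TicketOffice) async {
    // Calling into an actor from outside is asynchronous: just an await away.
    if await office.reserveSeat() {
        await updateSeatCountLabel(99)
    }
}
```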
Async/await makes it natural to express asynchronous code, structured concurrency makes concurrent code easier to reason about, and actors help you safely model shared state in a concurrent program. The Swift concurrency model brings together these fundamental pieces to make you more efficient, give you more power, and allow you to have more fun building concurrent apps. Of course, the language is just one piece of the puzzle. The frameworks built with Swift are just as important. Now, back to Josh. Swift is the foundation for the next generation of APIs. With new features like Concurrency, we're evolving the language and frameworks together, so you'll see immediate benefits across the SDK, including with key technologies like SwiftUI. Two years ago, we began to reinvent UI development on our platforms. We started small, with a core API that allowed you to adopt SwiftUI incrementally in your existing applications. Last year, we added API to describe your app's life cycle, enabling you to develop apps entirely in SwiftUI from your first line of code. And this year, SwiftUI is taking another huge step forward, helping you deliver great experiences to all your users across all Apple platforms. We focused on APIs that we know are critical to your apps, because we also needed them to build ours. And your feedback helped us enhance the most important APIs, while also refining the development experience. This year, we've started using SwiftUI in apps like Maps, Photos, and Shortcuts. And we've rebuilt iOS apps like Weather, system interfaces like the Apple Pay payment sheet, and brand new watchOS apps like Find My, entirely with SwiftUI. To see just a few of the enhancements that make this possible, let's take a look at some ways that we can improve Fruta. We'll start with List, the most ubiquitous component across all our platforms. We can now easily add a swipe action to mark a smoothie as a favorite. Adding pull-to-refresh is just one more line. And Swift now makes it easy to limit a modifier to a single platform-- in this case, iOS. Adding a Search field is just one more line. Now, we could stop there, but let's add some search suggestions as well, which will be shown while we're typing. And let's test it right here in Xcode. We have a swipe action now, pull-to-refresh, and full search support including suggestions, all in just a few lines of code. Next, let's refine Fruta's accessibility support. First, a new modifier which adds accessibility rotors can make our app faster to navigate with VoiceOver. And second we'll improve the accessibility of this custom stepper control. Custom controls are often a source of poor accessibility, but we can now simply inherit the full accessibility implementation from the standard Stepper. Most SwiftUI APIs are available across all platforms, but we're moving platforms forward individually where appropriate as well. Let's add a multi-column table to our macOS app. I already added a new file for this, so we'll just add the new Table component here. And then within it, we'll just add three columns of data. Now let's run the macOS version of our app. We'll find a search field placed right where you expect it in the toolbar, and suggestions appear just below it while we type. We can switch to our new multi-column table that we added, and we'll see it's displaying the search results as well. And of course, we can clear the search to get them all back. Now let's switch to recipes and turn on VoiceOver. 
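As a rough recap of the List enhancements just shown, the new modifiers compose like this in SwiftUI. The smoothie data and view here are simplified stand-ins for the real Fruta code.

```swift
import SwiftUI

struct SmoothieList: View {
    @State private var searchText = ""
    @State private var smoothies = ["Berry Blue", "Carrot Chops", "Lemonberry"]

    private var results: [String] {
        searchText.isEmpty
            ? smoothies
            : smoothies.filter { $0.localizedCaseInsensitiveContains(searchText) }
    }

    var body: some View {
        List(results, id: \.self) { smoothie in
            Text(smoothie)
                .swipeActions {
                    Button("Favorite") { /* mark as favorite */ }
                        .tint(.yellow)
                }
        }
        .refreshable {
            // Pull-to-refresh: reload from the data source here.
        }
        .searchable(text: $searchText) {
            // Suggestions shown while the user types.
            ForEach(results, id: \.self) { suggestion in
                Text(suggestion).searchCompletion(suggestion)
            }
        }
    }
}
```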
With VoiceOver, we can easily access the rotor that we added to quickly choose a smoothie from the list. And VoiceOver interaction with our custom stepper now behaves exactly like a standard stepper, making it easy to use for all of our users. We're building our apps using these new capabilities, so we know that you'll find them helpful in yours as well. And we've just scratched the surface of what's new. For example, you're going to love SwiftUI's new material support. In the Fruta app, views like this are made more interesting by adding a background image, and they're kept legible by applying one of the new material styles behind the content. Content responds dynamically to this background, so instead of the gray normally used for secondary content in an opaque context, SwiftUI will automatically apply vibrant rendering to text, symbols, and even standard UI like separators. So with just one line of code, you can get great-looking results like this, automatically. And there's so much more. With all of these improvements, SwiftUI is the best way to build great experiences for all your users across all our platforms. And this year, we're bringing app development with SwiftUI to iPad in Swift Playgrounds. It's so much fun, and Matt will show you all about how it works.
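And the material effect described a moment ago really is a one-modifier change. A small sketch, with illustrative content:

```swift
import SwiftUI

// Sketch of a material-backed card; the text is placeholder content.
struct SmoothieCard: View {
    var body: some View {
        VStack(alignment: .leading) {
            Text("Berry Blue").font(.headline)
            Text("Blueberries and raspberries")
                .foregroundStyle(.secondary) // rendered vibrantly over the material
        }
        .padding()
        .background(.ultraThinMaterial, in: RoundedRectangle(cornerRadius: 12))
    }
}
```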
You know Swift Playgrounds provides a great way to learn how to code, and it's been used by millions of people to expand their knowledge of Swift. And beyond being a great way to learn, we know a lot of you already use Swift Playgrounds to experiment, sketching out new ideas and playing with the latest features in the iPadOS SDK. This year, Swift Playgrounds 4 is taking a huge step forward by allowing you to build apps, and even submit them to the App Store right from your iPad. With the ability to create apps on iPad, you can be more productive in Swift Playgrounds than ever before, allowing you to work on your ideas wherever you go, on whichever device you prefer. And with a new package-based project format, you can seamlessly bring your work between Swift Playgrounds and Xcode. Let's dive in and take a look. This is Swift Playgrounds 4. It's got all of the great Learn to Code content that's helped inspire new developers around the world, and now, you can create projects that let you build SwiftUI apps. Let's make a new one now and see what we can build. I'll open the new project I created. In an app project in Swift Playgrounds, my code is on the left, and the result of my work is on the right, just like I'm used to. What's new is deeply integrated support for SwiftUI, with live interactive previews powered by the same technology used in Xcode. My new project template comes with a Hello World placeholder, which I can easily replace with a text view of my own. I'll start typing Text and right away, I get helpful suggestions from code completion, which, new in this release, appears right below my insertion point. I'll accept the completion and write my own hello message. While I'm typing, my app updates live to show my changes with each keystroke. Now, let's have a little fun. I'm going to replace this static text with a button. I'll select my text view, and then add a button from the library. Here in the library, I can browse and search through assets in my project, as well as the SwiftUI views, modifiers, colors, and SF Symbols provided by iPadOS. For now, I'll just add my button.
I'm going to fill the action in with a simple print statement.
For the body, I'll use a Label with a system image. The text will be "Say Hello." And the image will be the SF Symbol for Swift. I've now got an interactive button in my app. When I tap it, the print message I wrote appears as a message bubble at the bottom of my screen. If I open the console, I can see a history of print statements that have been executed since I opened this project, and it updates in real-time as I interact with my app. Now, this button is purple because that's my app's accent color, which Swift Playgrounds chose for me when I created my project. If I open the document sidebar, I can access all of my app's top-level settings, like its name, accent color, and icon. As much as I do love purple, I think this smiley face will look big and bright in orange, so I'll change my accent color here, and both my app's icon and the tint color of the button I just made will update to reflect the change. This has been really fun, but Swift Playgrounds isn't just for experimentation. I've got another app that I've been working on for a while. I use this app to track the amount of time I spend on my favorite hobbies, and I think others might find it useful as well. I can get a feel for what the installed app would look like by taking it full-screen. Now I can explore my app in its full-width two-, or three-column layout. I can jump out of full screen and return to my code whenever I like. This feels great, and I think my hard work is ready to share with my friends and family with TestFlight. Anyone with a Developer Account can upload their apps from the App Settings area once they're ready for App Store Connect. When I tap the upload button, Swift Playgrounds builds, packages, and uploads my app. I can then hop over to the App Store Connect website, and make my app available via TestFlight, and when it's ready, submit it to the App Store, and share it with the world. And that's a quick look at Swift Playgrounds 4, with the ability to create apps using SwiftUI right on your iPad. Swift Playgrounds 4 will be available later this year. We know you're going to love having the freedom to develop your app ideas wherever you go, on whichever device you prefer. And now, I'll hand it back to Susan.
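For reference, the little button assembled in the Playgrounds demo above comes down to a few lines of SwiftUI; the hello message is illustrative.

```swift
import SwiftUI

struct HelloButton: View {
    var body: some View {
        Button {
            print("Hello, Swift Playgrounds!")   // appears in the console history
        } label: {
            Label("Say Hello", systemImage: "swift")
        }
    }
}
```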
So much of the way we experience the world is through visual communication, and that's a big part of using Apple devices. Our technologies for graphics, displays, and augmented reality are front and center, whether you're glancing at the Always-On display on Apple Watch, enjoying ProMotion as you work with video tools on iPad Pro, playing a game on your iPhone, or creating immersive 3D content on your Mac. And now Myra and Eric are gonna take you through what's new this year, starting with augmented reality.
AR is a powerful technology and thousands of you are already using it in your apps to transform how we all work, play, and express ourselves. With over a billion AR enabled iPhones and iPads around the world today, there's never been a better time to start adding AR experiences to your apps or building entirely new ones. Historically, building great AR apps has required deep knowledge of 3D modeling and a mastery of sophisticated rendering engines. However, we want all of you to be able to create amazing AR experiences. This is why we've released a suite of technologies to make it easy for you to get started with AR. One of these is RealityKit, our 3D rendering, audio, animation, and physics engine built from the ground up for AR. RealityKit makes rendering immersive AR experiences simple, featuring photorealistic rendering, and camera effects like noise and motion blur. RealityKit also takes advantage of our latest hardware like the LiDAR Scanner, which enables virtual objects to behave just like they were really there with people and object occlusion. And it's all written in Swift. Today, we're announcing RealityKit 2, a huge update that gives you more visual, audio, and animation control and tackles the most difficult part of making great AR apps-- creating 3D models. If you've ever created one before, you know a single model can take hours and thousands of dollars to make. Now, with Object Capture, you'll be able to make 3D models in minutes using your iPhone to capture 2D images of an object and the Object Capture API on Mac to turn these images into lifelike 3D models, optimized for AR. This process is so simple. You start by taking a series of pictures with your iPhone or iPad to capture all angles of the object, including the bottom, because we support flipping the object and automatic foreground segmentation. You can use apps like Qlone, which provide excellent guides to help streamline your workflow. Then, using the Object Capture API, it only takes a few lines of code to generate your 3D model. You start a new photogrammetry session in RealityKit that points to the folder of your captured images. Then, call the process function to generate the model at the desired level of detail. It's that easy! Object Capture enables you to generate USDZ files optimized for AR Quick Look, so users can view them in Messages, Mail, Safari, and other apps. You can also generate USD or OBJ asset bundles from the Object Capture API that can be used for ray-tracing and other post-production workflows. Turning real world objects into 3D models has never been easier. You can get started using Object Capture today with our sample code, and we're working with some of the leading 3D content creation tools to bring this workflow into many of the pro apps you already use like Unity Mars, Cinema 4D, and Qlone, available later this year. It's easy to bring Object Capture models into Xcode and use the new RealityKit APIs to add effects. My team and I tested Object Capture by scanning our favorite food, and we built an AR App Clip to share our recipes, which include the AR preview of the dish. The chocolate croissant we captured using Qlone is actually a virtual replica of a croissant someone on my team baked, and I want to add it as another recipe to our App Clip.
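Here is a rough sketch of the Object Capture flow Myra described, using the RealityKit photogrammetry APIs on the Mac; the folder and file paths are placeholders.

```swift
import RealityKit

// A sketch of the Object Capture flow described above (macOS only).
func createModel() async throws {
    let images = URL(fileURLWithPath: "CapturedImages/", isDirectory: true)
    let output = URL(fileURLWithPath: "Croissant.usdz")

    // Point a photogrammetry session at the folder of captured photos...
    let session = try PhotogrammetrySession(input: images)

    // ...and ask it to produce a model at the desired level of detail.
    try session.process(requests: [.modelFile(url: output, detail: .reduced)])

    // Progress and the finished model are delivered on an async sequence.
    for try await result in session.outputs {
        if case .processingComplete = result {
            print("USDZ written to \(output.path)")
        }
    }
}
```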
I'll start by dragging the 3D model of my croissant into my ARApp project. Next, I'll anchor it to my App Clip Code using ARKit and initialize a ModelEntity for the asset. I can always fully examine the 3D model directly with Quick Look in Xcode at any time while building my project, before deploying my App Clip. We've used the new RealityKit APIs in our App Clip to add effects to each AR dish to make it more realistic. Because RealityKit is a native rendering engine, we can fit multiple AR scenes or recipes into the App Clip. Let's check it out.
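A sketch of the anchoring step just described, assuming an ARView whose session has App Clip Code tracking enabled and a "Croissant" model in the app bundle (both hypothetical here):

```swift
import ARKit
import RealityKit

// Sketch only: place the captured croissant on a detected App Clip Code.
final class RecipeARCoordinator: NSObject, ARSessionDelegate {
    let arView: ARView

    init(arView: ARView) {
        self.arView = arView
        super.init()
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let codeAnchor as ARAppClipCodeAnchor in anchors {
            // Create a RealityKit anchor at the detected App Clip Code...
            let anchorEntity = AnchorEntity(anchor: codeAnchor)

            // ...and attach the Object Capture model to it.
            if let croissant = try? ModelEntity.loadModel(named: "Croissant") {
                anchorEntity.addChild(croissant)
            }
            arView.scene.addAnchor(anchorEntity)
        }
    }
}
```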
When I scan the App Clip Code, it launches the App Clip and then anchors the chocolate croissant right on top. To make the croissant more realistic, we used the new RealityKit custom surface shader to add emissive light and pull back on the ambient occlusion. Let's take a look at a few more dishes from the team, like seared steak. Here we added onto the custom surface shader by creating a steam effect with the new Procedural Geometry API to layer in a flip-book shader. Because the steam is procedural, we can use the same effect on lots of recipes, like this pizza. Notice how the steam effect procedurally expanded with the size of the pizza. For this barbecue chicken dish, we've added a full-screen post-processing fire effect to indicate this dish is spicy. And finally, we dropped the flames and instead used a new compute shader and geometry modifier to add some celebratory confetti around the birthday cake. As you can see, we've opened up RealityKit rendering to more customizations, and we can't wait to see your creativity in how you use these new APIs. These are just some of the exciting new improvements we have for AR that enable all developers to create 3D models and build more immersive and lifelike AR experiences. One of the foundational aspects of what we do in ARKit and RealityKit is our graphics technologies. And Eric will give us an update on what's new. A core idea of how we build products at Apple is that we bring together the most amazing hardware and software, and our approach to graphics reflects that ideal. For years, we've delivered powerful Apple-designed GPUs for iPhone and iPad, paired with our Metal graphics and compute APIs to help you get the most out of our products. And now, with the M1 chip, not only are we delivering an unprecedented level of graphics performance and power efficiency in our latest Macs and iPad Pro, but we have created a unified Apple graphics platform with a common architecture based on Metal, the Apple GPU, and unified memory that spans from iPhone to iPad to Mac. And this platform enables a fundamental shift. Graphics workloads that previously required high-end workstations or discrete GPU gaming computers are now possible across our most popular products. For instance, the console-level performance of this unified platform has enabled developers like Larian to bring their AAA game, Divinity: Original Sin 2, to Mac and now to iPad. And Deep Silver is using M1 and Metal's modern shader pipeline to enable the high-performance, immersive graphics in their survival game Metro Exodus for Mac. But this graphics platform is not just for games. Metal compute APIs are now accelerating the next generation of professional GPU renderers, like the all-new Octane X from OTOY and Maxon's Redshift renderer in Cinema 4D, now running Metal-accelerated on the Mac for the very first time. So to help you bring your graphics apps and games across all of Apple's powerful devices, we focused on two big areas this year: advanced graphics and gaming features, and powerful graphics developer tools. First, we focused on three key features essential to modern high-end games and GPU rendering algorithms. In order to accelerate complex mathematical operations, model the behavior of light, and represent realistic surfaces, modern GPU renderers need to interleave Metal graphics and compute commands in the same pipeline, which is why Metal now supports calling dynamic libraries and ray query primitives directly from your graphics shaders.
And you can create even more photo-realistic rendering with the new stochastic motion blur function in the Metal Ray Tracing API. For games to achieve higher frame rates with lower latency and less judder, developers need more control over the display. To accomplish this and take advantage of the awesome graphics performance of the latest iPad Pro, your game can use the Metal presentation time API and the ProMotion display to dynamically adapt your app's frame rate based on your desired latency between rendering and input. And macOS Monterey adds support for Adaptive Sync Displays. This means you can now take advantage of these ultra-low-latency and variable refresh rate displays for your Mac games as well. Now, high-end games with advanced graphics are often designed around using game controllers as input. And adding game controller support is a powerful and easy way to use a common input model to bring your games to our unified graphics platform. Our Game Controller framework now supports the most popular controllers, including the latest Xbox Series X Wireless Controller and the PlayStation 5 DualSense Controller, complete with haptics support. To make it even easier for you to bring your controller-based games to iPhone and iPad, we've added a new API so that you can enable an on-screen virtual game controller with just a few lines of code. And game controller support is more valuable than ever, because in macOS Monterey and iPadOS 15, players can find the games their friends are playing, directly navigate to the app library to launch a game, and then hit the "Share" button to record their favorite game highlights, all without the controller ever leaving their hands. Now, along with these new advanced APIs and features, Xcode 13 adds powerful new graphics developer tools for you to optimize and debug your GPU code, each designed to bring your modern high-end games and graphics applications to the next level. First, when building advanced GPU renderers and games, GPU shaders can get really big. Debugging 10,000 lines of shader code across thousands of workgroups all running in parallel can take a really long time. To help you streamline this process, Xcode 13 adds Selective Shader Debugging. Here, we're using Selective Shader Debugging to choose exactly which functions to debug within a much larger GPU shader. This can dramatically reduce the time it takes you to iterate and debug your largest shaders, which lets you develop faster and focus on adding features and performance to your GPU code. Next, high-end AAA games also require the latest in modern texture compression support, which is why we've updated our powerful Metal Texture Converter Tool to give you direct control over the Texture Converter compression pipeline, added all-new gamma-aware pixel transforms, and vastly expanded support for the latest ASTC and BC texture compression formats used by Mac, PC, and iOS games. This makes it even easier to optimize your game's texture assets for each of Apple's devices. Finally, to help you achieve peak performance with your most advanced rendering, Xcode 13 adds an all-new GPU Timeline view in the Metal Debugger. This powerful new view lets you visually debug your Metal commands, resources, and buffers on a timeline of events, alongside powerful performance counters and bottleneck analysis information.
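Going back to the on-screen virtual game controller mentioned above, enabling it really is only a few lines with the Game Controller framework; the element layout here is just an example.

```swift
import GameController

// Sketch of enabling the on-screen virtual game controller (iOS 15).
func showVirtualController() -> GCVirtualController {
    let configuration = GCVirtualController.Configuration()
    configuration.elements = [GCInputLeftThumbstick, GCInputButtonA, GCInputButtonB]

    let controller = GCVirtualController(configuration: configuration)
    controller.connect()   // overlays the touch controls on your game
    return controller
}
```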
With Apple CPUs, GPUs, and Metal, we have created a unified graphics platform spanning over a billion devices, with the latest features and developer tools to enable you to unleash all-new levels of capability and performance for your graphics, pro apps, and games. And now back to you, Susan.
Your apps can help connect people with ideas, services, tools, and most importantly, other people. Finding balance is just as important as connecting, so this year we're enabling you to help users focus on your app at the right moments, to manage devices of loved ones while respecting their privacy, and to make your app's content the center of new, shared, intimate experiences built across Apple platforms. We created a powerful new set of APIs that'll help your app create those kinds of relationships. Let's start with Heena and Matt to tell us about Focus. iOS 15 introduces a powerful new set of tools to help people focus. These tools help reduce distractions, so that people can be in the moment. And it starts with an entirely new approach to notifications. Here are some notifications that have piled up on my Lock screen. Their levels of urgency are clearly different. But they all behaved identically. They had the same look, the same haptic, the same apparent level of importance. Now, with the new Interruption Level API, there are more nuanced ways for apps to convey different levels of urgency. Notifications can be assigned one of four interruption levels. Passive interruptions are silent and don't wake the device. People will see them the next time they pick up their phones. You might want to use these for notifications that aren't time-sensitive. Active interruptions will play a sound or haptic just like notifications today. Time Sensitive interruptions are designed to visually stand out and hang out on the Lock screen a little longer if the user hasn't tapped on it. They'll also be announced by Siri if someone is wearing AirPods. And you'll want to use this for notifications that require immediate attention. Critical alerts are the most urgent category. They'll play a sound even if the device is muted. These are reserved for only very serious health and safety concerns, and require an approved entitlement. There's another category of notifications that deserves special attention: communications from people. If you have a communication app, it's important that you tell the system about your message and call notifications. The system will then use this information to tune your notifications' appearance and behaviors, which will help people better interpret them. Once implemented, your notifications will go from the standard appearance to looking like this, featuring a prominent avatar with your app icon superimposed, and the same avatars will be used elsewhere in the system, like in the Share Sheet. I'm so excited to see those avatars! All right, so notifications are a really effective way to get people's attention. But they can also be kind of ephemeral. If they're not well-timed, people can easily miss them. To help users engage on notifications on their own time, we're introducing the Notification Summary, which delivers notifications as a helpful bundle at times the user chooses, so they can quickly catch up when it's best for them. The summary bundles Passive and Active notifications from user selected apps and presents them in a beautiful layout. It then sticks around on the Lock screen for a while until it's seen. The summary is also personalized for each user. As you can see, there are two marquee slots at the top. What's featured there is based on a few factors: First, to provide variety, those two apps are sampled from inside the summary. From there, we do some additional weighting. A notification with a large thumbnail will always be chosen over one without. 
And the notification with the highest relevance score-- which is something that you determine-- will be chosen over others from the same app. Okay, so you might be wondering, "How does my app end up in the summary at all?" Well, first, it's completely up to the user if they want to use the notification summary. And if they do, apps that send the most notifications will be suggested. Users can then customize which apps go in the summary along with the times they'll receive them. If your app is placed in the scheduled summary, there's still a way for you to reach the user in real time. That's where Time Sensitive notifications come in. Notifications that use this interruption level will be delivered immediately. Remember, you should only mark notifications as Time Sensitive if they require immediate attention and are relevant in the moment. No feature eliminates distractions more than Do Not Disturb. But Do Not Disturb silences all notifications, and we wanted to give users more flexibility. With Focus, users can choose the apps and people that they need to receive notifications from based on what they're currently doing. They can carve out their day for work or create a Focus for an activity like gaming, reading, or fitness. While in a Focus, users can share their status with others, so they know not to interrupt. But if it's truly urgent, a message can break through and notify anyway. Your communication app can also request access to the user's Focus status. If granted, the system will inform your app when it changes, so your app can keep its status in sync with the rest of the system. Your app can even provide users the ability to break through for urgent communications. We're providing users with more control and flexibility than ever to manage their notifications. And to help make sure these tools are working for them, the system will periodically check in to see if a specific adjustment to their settings might be helpful. It's based on how users interact with your app and your notifications. So if a user is typically using an app while in a Focus, then the system might suggest allowing that app's notifications during that Focus. Or if a user is interacting with an app's Time Sensitive notifications, then the system might suggest reverting them to active notifications. The same goes for when an app sends one notification after another and the user isn't engaging. The system might suggest muting all notifications from that app or maybe just a single conversation for a limited amount of time. So to make the most of these new features, there are a few key things that you need to do. You can help make sure the right content is featured in the marquee slots at the top of the summary by setting a relevance score on your notifications and attaching the appropriate thumbnails. You should think carefully about which interruption levels make sense for your notifications. If you have a communication app, you should adopt the new User Notifications API to tell the system about your message and call notifications. You should also reflect the user's Focus in your app by using the new Focus Status API. We think these tools, with your help, will go a long way in helping users reduce distractions. Next, Martin is gonna tell us about the new Screen Time API. Thanks, Matt. Okay, let's switch gears to talk about Screen Time and parental controls.
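Before moving on, here's a minimal sketch of the notification APIs Matt just described: the interruption level and the relevance score are both properties on the notification content. The title, body, and identifier are placeholders, and Time Sensitive delivery requires the corresponding entitlement.

```swift
import UserNotifications

// Sketch: a Time Sensitive notification with a relevance score for the summary.
func scheduleDeliveryUpdate() {
    let content = UNMutableNotificationContent()
    content.title = "Your order is arriving"
    content.body = "The courier is five minutes away."

    // Time Sensitive: delivered immediately, even if the app is in the summary.
    content.interruptionLevel = .timeSensitive

    // Helps the system pick which notification to feature from your app.
    content.relevanceScore = 0.9

    let request = UNNotificationRequest(identifier: "delivery-update",
                                        content: content,
                                        trigger: nil)
    UNUserNotificationCenter.current().add(request)
}
```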
We recognize that parents need modern, innovative solutions to help their children build healthy digital lives, and they also deeply value their family's privacy. And we've seen an appetite from many of you to deliver on these user needs. So today, we are releasing the Screen Time API, a set of tailor-made parental control frameworks that build upon our deep commitment to privacy. We had three key goals in mind with the Screen Time API: to offer you modern solutions for developing parental control apps, to empower you to build dynamic experiences and innovate beyond what even Screen Time offers today, and to protect user privacy. To that end, we've added three new Swift frameworks to the iOS SDK that enable you to innovate in the world of parental controls: Managed Settings, Family Controls, and Device Activity. First, let's talk about Managed Settings. Fundamentally, your parental control app needs a way to restrict what a child can do across their devices, and ensure that those restrictions remain in place until the parent says otherwise. With Managed Settings, your app can set a number of restrictions like locking accounts in place, preventing a password change, filtering web traffic, and limiting access to applications, much like Screen Time, but customized with your app's branding and functionality. By leveraging this framework, your app will be able to manage all of these restrictions.
Beyond restrictions, you'll be able to limit access to apps and websites when appropriate and provide a set of actions unique to your use cases. And finally, we lock the app in place so it can only be removed with the parent's explicit approval.
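As a rough illustration of the restrictions described above, a Managed Settings store might be configured like this. The FamilyActivitySelection is assumed to come from the system picker covered next, and the specific settings shown are only examples.

```swift
import FamilyControls
import ManagedSettings

// Sketch only: apply a few Screen Time-style restrictions with Managed Settings.
func applyRestrictions(for selection: FamilyActivitySelection) {
    let store = ManagedSettingsStore()
    store.account.lockAccounts = true                        // lock accounts in place
    store.passcode.lockPasscode = true                       // prevent passcode changes
    store.shield.applications = selection.applicationTokens  // limit the chosen apps
}
```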
Now, the Family Controls framework is at the heart of our privacy model, and it serves two key user-facing experiences. First, it allows parents to authorize your app for management with their iCloud credentials, ensuring that the device is for a child in that family. And it provides a personalized experience via a system App & Website picker, which allows parents to choose which apps and sites should be restricted, all while protecting user privacy. We wanted to allow parents to manage and restrict the apps and websites that their children use, but do so in a way that doesn't divulge their private application and web browsing details. So, rather than returning a selection of raw bundle IDs and URLs, the picker will return opaque tokens instead. These tokens allow your app to keep track of which apps and websites a parent wants to manage, all while ensuring that parents are the only people who can access this highly sensitive information. And these tokens enable functionality across all of these frameworks. Use a token to limit access to a specific app or website with Managed Settings, or gain insight into app and website activity, something that wasn't possible on iOS until today, with the Device Activity framework. With the tokens provided by the Activity Picker in Family Controls, you're ready to leverage the power of Device Activity. You can register unique time windows for different apps and activities, each emitting a warning like, "Five more minutes left," and a completion event. Once your app receives these events, it can react accordingly by changing restrictions, limiting access to relevant apps and websites or... encouraging children to do their homework. Whatever experience you are trying to deliver for your users. This concept of seeing device activity, not just browser activity, but across all apps on the device is totally new and is a unique opportunity for you to innovate in the world of parental controls. With the Screen Time API, you could enable family-wide downtime, or even create incentives to do something fun after something educational, like unlocking some gaming after doing some homework. We're super excited to see how you will build on these APIs to help parents and families manage the way that they use our devices. And now over to Vi to tell us what's new with Widgets. Last year, we introduced Widgets on the Home screen, and people loved them. Widgets provide deep personalization with delightful and timely views of the most relevant content from your app. They're all about glanceability. People love how widgets present the most useful information from your app, in a single glance, at exactly the right time. A tap can deep-link to just the right part of your app. Over the past year, you've created some amazing widget experiences that have truly inspired us. The best widgets are focused, dynamic, and provide unique views of the app throughout the day. Like this one, from Day One. That's me and my kids on a trip to Santa Cruz. Surfacing just the right piece of content in the right context helps your users discover the magic of your apps. And we've seen that widgets encourage people to use your app even more. This year, we are taking the next step in making your apps more useful and more discoverable with widgets. And it starts with letting people place widgets among your apps on the iPad Home screen. To take advantage of the large screen, we're introducing a new extra-large size for widgets.
This means a whole new set of opportunities for entirely new types of widgets that work best on iPad. To make getting into widgets even easier, we're adding new default Home screen layouts with widgets on iPhone and iPad. These include widgets from apps people use the most, arranged in Smart Stacks. Stacks let you save space by placing multiple widgets on top of each other. Smart Stacks use on-device intelligence to show the widget that's most relevant right now. Building on the foundation of last year's TimelineRelevance API, we are going beyond simply rotating the stack with on-device intelligence. Now we can give your widget more exposure by suggesting it even if it was not already in the stack. And how do we do this? Enter widget suggestions. How people interact with your app, as well as what you can tell us, helps us suggest your widget in a stack. Let's see how this works with our Fruta example app. If the user orders a green juice every morning, on-device intelligence will learn to suggest it. To opt in, you'll need to adopt the Intents framework and donate an interaction. That's it! Now your widget can be automatically suggested based on how people use your app. When you want to provide new information to users, you can also donate using the Intents API. For instance, the Fruta app can adopt this to offer a free birthday smoothie. Both past usage behavior and new, relevant intent donations can then help us suggest your widget in a stack at the right time. And if a user finds your widget useful, they can easily add it permanently with a long press. So that's our big update to widgets this year: more useful and more discoverable than ever. Next up is some news on SharePlay. Over to Ryan and Juan.

This year, we've all had to improvise to find new ways of connecting. And it has been striking to see many of you innovate and build awesome new ways for people to feel a sense of togetherness while at a distance. And with people relying on FaceTime and iMessage more than ever to stay connected, it was only natural for us to build on those experiences, to help people feel more together when they're apart. Some of the most meaningful moments people have together are about more than just sharing a conversation; they're about sharing experiences. So to foster that sense of closeness, we needed to build something completely new. And we had an ambitious goal. We wanted FaceTime to feel like a portal that transported people into the same space as some of their closest friends and family. So we built SharePlay. And we've given you the tools you need to create magical SharePlay experiences with the new GroupActivities framework. We bring the group, you bring the activities. And it all comes down to this concept of activities. When someone in a FaceTime call starts an activity, SharePlay will bring the group directly into your app, allowing for rich interactive experiences where users can communicate, just like they're used to. There are a lot of possibilities to explore with the Group Activities framework. And what better activity to do in your virtual living room than watching your favorite shows with some of your closest friends.
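Circling back to the widget-suggestion opt-in for a moment, here's a minimal sketch of an interaction donation. The surrounding function is illustrative; the intent instance is assumed to come from a custom intent (for example, a smoothie order) defined in the app's intent definition file, not Fruta's actual code.

```swift
import Intents

// Donate an interaction so on-device intelligence can learn when this content is
// relevant and suggest the widget in a Smart Stack. The concrete intent is assumed
// to be a custom intent generated from the app's .intentdefinition file.
func donate(_ intent: INIntent) {
    let interaction = INInteraction(intent: intent, response: nil)
    interaction.donate { error in
        if let error = error {
            print("Interaction donation failed: \(error.localizedDescription)")
        }
    }
}
```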
Hey, Juan, since your team just finished integrating SharePlay into the TV app, why don't you show us around? Sure thing! What do you want to watch? How about a little "Ted Lasso"? Sounds good. I press play, and the system asks me if I want to start shared playback, or play locally instead. This is where you come in. We're offering new APIs to start playback that are designed to fit right into your app's existing video experience. Now, because I chose shared playback, the system is coordinating the video on my device and Ryan's at exactly the same time, with Core Media and Group Activities doing the heavy lifting. That means when I hit pause, Juan's video pauses at the exact same moment. I can even jump to a favorite scene and everyone comes with me, as if we were all in the same room. All right? I mean, hey, Higgins and I are having lunch today. I love this scene! The magic behind this playback coordination means your media isn't retransmitted in any way. Everyone will get your full-fidelity video because it's playing in your app and streaming from your servers as it always does.

Now, let's see how easy it is to adopt Group Activities in a simple media app and take full advantage of the framework. There are just a few steps to get your app ready for shared playback. First, we need to define our Group Activity. We'll create a new type that conforms to the GroupActivity protocol and supply a URL for everyone in the group to load. If your app already supports deep links to content, you can use those here. We'll also provide some basic metadata to the system to customize system UI like confirmation dialogs and notices. Next, we need to hook up our play buttons. In our play() function, we'll create a new activity and call .prepareForActivation() on it. This is when the system presents that confirmation dialog you saw earlier. You can call this without any extra conditionals; it'll return immediately if the user is not on a FaceTime call. Now let's turn our attention to handling incoming activities. The initiator joins the session just like other participants, so the code looks the same for everyone. Here, I'm using Swift concurrency to create a new model with each session that's delivered. We'll then join the new session once the player appears.
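Here's a rough sketch of those steps for a hypothetical media app. The MovieWatchingActivity and SharedPlaybackModel types are assumptions for illustration, not the TV app's actual code, and the exact API shapes may differ slightly from what ships.

```swift
import AVFoundation
import Combine
import GroupActivities

// The activity every participant loads. Conforming types are Codable, so keep the
// stored properties small; here it's just a deep link to the content.
struct MovieWatchingActivity: GroupActivity {
    let movieURL: URL

    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.type = .watchTogether
        metadata.title = "Movie Night"
        metadata.fallbackURL = movieURL
        return metadata
    }
}

final class SharedPlaybackModel {
    let player = AVPlayer()
    private var subscriptions = Set<AnyCancellable>()

    // Hooked up to the play button. If the user is on a FaceTime call, the system
    // shows the shared-playback confirmation; otherwise it returns immediately.
    func play(movieURL: URL) async {
        let activity = MovieWatchingActivity(movieURL: movieURL)
        switch await activity.prepareForActivation() {
        case .activationPreferred:
            _ = try? await activity.activate()      // start the group session
        case .activationDisabled:
            startLocalPlayback(of: movieURL)        // not on a call: just play locally
        default:
            break
        }
    }

    // Every participant, including the initiator, receives the session here.
    func observeSessions() {
        Task {
            for await session in MovieWatchingActivity.sessions() {
                // Frame-accurate sync: hand the session to AVPlayer's coordinator.
                player.playbackCoordinator.coordinateWithSession(session)

                // Watch for other state changes to keep the UI up to date.
                session.$state
                    .sink { state in print("Session state is now \(state)") }
                    .store(in: &subscriptions)

                session.join()
                startLocalPlayback(of: session.activity.movieURL)
            }
        }
    }

    private func startLocalPlayback(of url: URL) {
        player.replaceCurrentItem(with: AVPlayerItem(url: url))
        player.play()
    }
}
```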
You can also observe the session for other state changes to update your UI accordingly. Finally, let's make sure we synchronize our players. Step 1: Grab your AVPlayer and call .playbackCoordinator.coordinateWithSession, passing in your session.
Step 2: There's no step 2. That's it! That's all you need to do to get frame-accurate AV sync with Group Activities and AVPlayer. The system handles the rest. Now, we've talked a lot about shared media experiences. But we wanted Group Activities to provide a foundation that could power even the most ambitious experiences you could dream up. So we started building on top of the fabric that powers Group FaceTime today, providing your app with a fast and reliable data channel. By taking on the job of group leader, our servers orchestrate a centralized state for the entire group. These servers don't see your users' data, because it's all end-to-end encrypted so that it stays private. And using this fast and secure data channel, you can create immersive experiences, from turning the page on a shared book to seeing the strokes that someone has drawn on a shared whiteboard live. We really want to inspire you to take full advantage of our APIs and bring your users together like never before. But before we show you a demo, we're gonna need to call in a little extra help with this one.
Hey, everyone. Thanks for joining. We're gonna need your help with this one last demo to really demonstrate the power of Group Activities. All right, we have a whiteboard demo app to show you what can be done with Group Activities. Now, by opening a shared canvas, I'm starting a new activity with the group to draw together. Now we're all looking at the same canvas, and we can interact with each other in a whole new way. If I draw somewhere on the canvas, everyone can see what I'm drawing live. Now this app is using the same APIs that you saw previously, but instead of synchronizing media, it's using GroupSessionMessenger to send my strokes to everyone's device. And this isn't screen sharing. Because the app is running natively on everyone's iPad, I can draw on the canvas, too.
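As a rough idea of what that looks like in code, here's a sketch built on GroupSessionMessenger, assuming a hypothetical Codable Stroke type and a CanvasSync helper; the drawing and rendering details are left out, and the exact messenger call shapes may differ slightly from what ships.

```swift
import CoreGraphics
import GroupActivities

// A stroke on the shared canvas; it only needs to be Codable to travel over the channel.
struct Stroke: Codable {
    var points: [CGPoint]
    var colorName: String
}

final class CanvasSync {
    private var messenger: GroupSessionMessenger?

    // Call this once the drawing activity's GroupSession has been delivered and joined.
    func configure<Activity: GroupActivity>(with session: GroupSession<Activity>) {
        let messenger = GroupSessionMessenger(session: session)
        self.messenger = messenger

        // Receive strokes from other participants over the end-to-end encrypted channel.
        Task {
            for await (stroke, _) in messenger.messages(of: Stroke.self) {
                render(stroke)
            }
        }
    }

    // Send a locally drawn stroke to everyone in the session.
    func broadcast(_ stroke: Stroke) {
        Task {
            _ = try? await messenger?.send(stroke)
        }
    }

    private func render(_ stroke: Stroke) {
        // Draw the stroke on the local canvas (app-specific).
    }
}
```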
Let's all give it a try.
So we can gather around a shared canvas no matter how far apart we are. And it's bringing us together like never before.
Thanks for the help, everyone. The best part is, you get everything you just saw, from SharePlay activities and playback synchronization to the fast, secure data channel, just by integrating your app with the GroupActivities framework. SharePlay is a great new way to elevate your app's content and help you create a more immersive experience for your users. We're eager to see the new shared experiences that you'll come up with using Group Activities. Now, over to Susan to wrap things up.
We believe the advances you've seen today will help you continue to build apps that make a difference. We're building tools that streamline your workflow and make it easier to build great apps faster. We've made it easier to build immersive content, games, and tools for professional creators. We've shown you how your apps can help users connect while still focusing on the things that matter most. What you've seen today is just the start. There's so much more to check out this week that we haven't even had a chance to touch on. We know you're gonna build something awesome, and we can't wait to see it.