What’s New in the iOS SDK
Learn about the key technologies and capabilities available in the iOS SDK, the toolkit used to build apps for iPhone, iPad, or iPod touch. For detailed information on API changes in the latest released versions, including each beta release, see the iOS Release Notes.
With the iOS 13 SDK, your app can take advantage of Dark Mode, Sign In with Apple, Core Data syncing with CloudKit, PencilKit, and more. You can build dynamic user interfaces faster with SwiftUI, write modern event processing code with Combine, and create a Mac version of your iPad app using UIKit.
Dark Mode
With iOS 13, users can switch to Dark Mode to transform iOS to a darkened color scheme, putting the focus on your work while controls recede into the background. For information about incorporating Dark Mode into your apps, see Appearance Customization. For design guidance, see the Human Interface Guidelines.
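As a minimal sketch, the usual pattern is to adopt the semantic system colors (which resolve automatically in both appearances) and react to appearance changes through the trait collection. The `updateArtwork(forDarkMode:)` helper below is hypothetical, standing in for any appearance-specific work your app needs:

```swift
import UIKit

class NoteViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Semantic colors resolve to light or dark variants automatically.
        view.backgroundColor = .systemBackground
    }

    override func traitCollectionDidChange(_ previousTraitCollection: UITraitCollection?) {
        super.traitCollectionDidChange(previousTraitCollection)
        // React only when the light/dark appearance actually changed.
        if traitCollection.hasDifferentColorAppearance(comparedTo: previousTraitCollection) {
            let isDark = traitCollection.userInterfaceStyle == .dark
            updateArtwork(forDarkMode: isDark)  // hypothetical helper
        }
    }

    private func updateArtwork(forDarkMode isDark: Bool) { /* app-specific */ }
}
```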
SwiftUI
SwiftUI is a modern approach to building user interfaces for iOS, macOS, watchOS, and tvOS. You can build dynamic interfaces faster than ever before, using declarative, composition-based programming. The framework provides views, controls, and layout structures for declaring your app’s user interface. It also provides event handlers for delivering taps, gestures, and other types of input to your app, and tools to manage the flow of data from your app’s models down to the views and controls that users will see and interact with.
To get started, see Learn to Make Apps Using SwiftUI.
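To give a flavor of the declarative style, here is a small hypothetical view: the body describes the interface for the current state, and mutating the `@State` property causes SwiftUI to re-render it.

```swift
import SwiftUI

struct CounterView: View {
    @State private var count = 0   // source of truth owned by this view

    var body: some View {
        VStack(spacing: 12) {
            Text("Taps: \(count)")
                .font(.title)
            Button("Tap me") {
                count += 1          // changing state re-renders the body
            }
        }
    }
}
```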
Multiple UI Instances
With iOS 13, the user can create and manage multiple instances of your app’s user interface simultaneously, and switch between them using the app switcher. On iPad, the user can also display multiple instances of your app side by side. Each instance of your UI displays different content, or displays content in a different way. For example, the Calendar app can display the appointments for a specific day and for an entire month side by side.
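Apps opt into multiple UI instances by adopting the scene life cycle (declared in the app's scene manifest in Info.plist). A sketch of the scene delegate that backs each window instance, assuming a scene manifest is already configured:

```swift
import UIKit

// Each window scene gets its own delegate instance, so per-window state
// (like which day or month is displayed) lives here, not in the app delegate.
class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    func scene(_ scene: UIScene,
               willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        guard let windowScene = scene as? UIWindowScene else { return }
        let window = UIWindow(windowScene: windowScene)
        window.rootViewController = UIViewController() // your content here
        window.makeKeyAndVisible()
        self.window = window
    }
}
```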
SF Symbols
Symbol images give you a consistent set of icons to use in your app, and ensure that those icons adapt to different sizes and to app-specific content. Symbol images use the SVG format to implement vector-based shapes that scale without losing their sharpness. They also support many traits typically associated with text, such as weight and baseline alignment.
To find symbol images that you can include in your app, use the SF Symbols app or create your own symbol images. To learn more, see Configuring and Displaying Symbol Images in Your UI.
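Loading a system symbol and scaling it to match adjacent text takes only a few lines; the symbol name below is one of the standard SF Symbols:

```swift
import UIKit

// Configure the symbol to track the headline text style at a large scale.
let config = UIImage.SymbolConfiguration(textStyle: .headline, scale: .large)
let heart = UIImage(systemName: "heart.fill", withConfiguration: config)

let button = UIButton(type: .system)
button.setImage(heart, for: .normal)
```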
Bring Your iPad App to Mac
Xcode 11 gives you a head start in bringing your iPad app to Mac. Begin by selecting the "Mac" checkbox in the project settings of your iPad app. To learn more, see Creating a Mac Version of Your iPad App and Bring Your iPad App to Mac.
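Where the shared code needs to diverge between the iPad and Mac builds, you can branch at compile time; a minimal sketch:

```swift
import UIKit

func configureToolbar() {
    #if targetEnvironment(macCatalyst)
    // Mac-specific behavior, e.g. rely on the window's NSToolbar
    // instead of in-view controls.
    #else
    // iPad behavior.
    #endif
}
```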
ARKit 3
ARKit 3 brings the following new features:
- Motion Capture. Lets your app track the movements of a person’s body and skeletal joints.
- People Occlusion. This allows people to walk in front of the virtual content that’s in the camera feed.
- Track multiple faces. Track up to three faces at once with the front-facing camera on devices with the TrueDepth camera.
- Simultaneous front and back camera. Use both cameras to get face and world data at the same time.
- Collaborative sessions. Collaboratively map the environment and get into shared AR experiences faster.
- Visual coherence. Automatically add effects like camera motion blur and noise that make AR content even more realistic.
- AR Coaching UI. A 2D overlay UI to help guide users on getting started, detecting planes, and more.
- Automatic detection of image size and faster reference image loading.
- More robust 3D object detection, and detection of up to 100 images at a time.
- HDR quality environment textures.
To learn more about these features, see the ARKit framework documentation.
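For instance, People Occlusion is enabled by adding a frame semantic to the session configuration, guarded by a capability check since it requires supporting hardware; a sketch, assuming an `ARSCNView` named `sceneView` is set up elsewhere:

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()

// People Occlusion requires supported hardware, so check before opting in.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}

// sceneView.session.run(configuration)
```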
RealityKit
RealityKit is a new Swift framework for simulating and rendering 3D content in your augmented reality apps, including support for animation, physics, and spatial audio. RealityKit leverages information provided by ARKit to seamlessly integrate virtual objects into the real world. For more information, see the RealityKit framework documentation.
Sign in with Apple
Sign in with Apple gives you a fast, secure, and privacy-friendly way for people to set up an account and start using your services from your apps and websites. For more information, see Sign in with Apple.
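A sketch of initiating the authorization flow; the `presenter` parameter stands in for whichever of your objects adopts the controller's delegate and presentation-context protocols:

```swift
import AuthenticationServices

func startSignInWithApple(
    presenter: ASAuthorizationControllerDelegate & ASAuthorizationControllerPresentationContextProviding
) {
    // Request an Apple ID credential, optionally asking for name and email.
    let request = ASAuthorizationAppleIDProvider().createRequest()
    request.requestedScopes = [.fullName, .email]

    let controller = ASAuthorizationController(authorizationRequests: [request])
    controller.delegate = presenter
    controller.presentationContextProvider = presenter
    controller.performRequests()
}
```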
Background Tasks
Keep your app content up-to-date and perform long-running tasks while your app is in the background using the new BackgroundTasks framework. For more information, see the BackgroundTasks framework documentation.
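The basic flow is to register a launch handler for a task identifier and then submit requests to the scheduler. The identifier below is illustrative; it must also be listed under the permitted background task scheduler identifiers in your Info.plist:

```swift
import BackgroundTasks

let refreshIdentifier = "com.example.app.refresh" // illustrative identifier

// Register the handler once, early in app launch.
func registerBackgroundTasks() {
    BGTaskScheduler.shared.register(forTaskWithIdentifier: refreshIdentifier,
                                    using: nil) { task in
        // Do the refresh work, then always mark the task complete.
        task.setTaskCompleted(success: true)
    }
}

// Ask the system to run the task no earlier than 15 minutes from now.
func scheduleAppRefresh() {
    let request = BGAppRefreshTaskRequest(identifier: refreshIdentifier)
    request.earliestBeginDate = Date(timeIntervalSinceNow: 15 * 60)
    try? BGTaskScheduler.shared.submit(request)
}
```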
Record a video using the front and back cameras simultaneously using AVCaptureMultiCamSession. Capture hair, skin, and teeth segmentation mattes in photos using AVSemanticSegmentationMatte. Opt in to specify the desired photo-quality prioritization between speed and quality. And disable geometric distortion correction on super-wide cameras in your ARKit-enabled apps.
To learn more about these features and the AVFoundation Capture subsystem, see Cameras and Media Capture.
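Multi-camera capture is only available on supported hardware, so a session should be created behind a capability check; a minimal sketch:

```swift
import AVFoundation

func makeSessionIfSupported() -> AVCaptureMultiCamSession? {
    // Older devices cannot run front and back cameras simultaneously.
    guard AVCaptureMultiCamSession.isMultiCamSupported else { return nil }
    let session = AVCaptureMultiCamSession()
    // Add front- and back-camera inputs and their outputs here, wiring
    // connections explicitly rather than relying on automatic connection.
    return session
}
```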
Combine
Combine is a new framework that provides a declarative Swift API for processing values over time. These values can represent user interface events, network responses, scheduled events, and many other kinds of asynchronous data. With Combine, you declare publishers that expose values that can change, and subscribers that receive those values from the publishers. Combine makes your code easier to read and maintain, by centralizing your event-processing code and eliminating troublesome techniques like nested closures and convention-based callbacks.
For more information, see the Combine framework documentation.
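A minimal example of the publisher/operator/subscriber pipeline, using a simple array publisher in place of a real asynchronous source:

```swift
import Combine

// A publisher emits values over time; operators transform them;
// a subscriber (sink) receives the results.
let cancellable = [1, 2, 3, 4].publisher
    .map { $0 * 10 }
    .filter { $0 > 15 }
    .sink { value in
        print(value)   // prints 20, 30, 40
    }
```

Keeping the returned `cancellable` alive is what keeps the subscription active; releasing it cancels the pipeline.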
Core Haptics
The new Core Haptics framework lets you compose and play haptic patterns to customize your app’s haptic feedback, extending the default patterns provided by the system. To learn more, see the Core Haptics framework documentation.
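A sketch of composing and playing a single sharp tap; haptic playback requires supporting hardware, so the capability check comes first:

```swift
import CoreHaptics

func playTap() throws {
    guard CHHapticEngine.capabilitiesForHardware().supportsHaptics else { return }
    let engine = try CHHapticEngine()
    try engine.start()

    // A transient event is a short, discrete tap.
    let sharpness = CHHapticEventParameter(parameterID: .hapticSharpness, value: 1.0)
    let intensity = CHHapticEventParameter(parameterID: .hapticIntensity, value: 0.8)
    let tap = CHHapticEvent(eventType: .hapticTransient,
                            parameters: [sharpness, intensity],
                            relativeTime: 0)

    let pattern = try CHHapticPattern(events: [tap], parameters: [])
    let player = try engine.makePlayer(with: pattern)
    try player.start(atTime: CHHapticTimeImmediate)
}
```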
Apple CryptoKit
Use the new Apple CryptoKit framework to perform common cryptographic operations securely and efficiently, such as:
- Computing and comparing cryptographically secure digests.
- Using public-key cryptography to create and evaluate digital signatures.
- Generating symmetric keys, and using them in other operations like message authentication and encryption.
For more information, see the Apple CryptoKit framework documentation.
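The three operations listed above can be sketched in a few lines each:

```swift
import CryptoKit
import Foundation

let message = Data("hello".utf8)

// Digest: a cryptographically secure SHA-256 hash of the message.
let digest = SHA256.hash(data: message)

// Signature: sign with a private key, verify with its public key.
let privateKey = Curve25519.Signing.PrivateKey()
let signature = try privateKey.signature(for: message)
let isValid = privateKey.publicKey.isValidSignature(signature, for: message)

// Symmetric encryption: AES-GCM with a freshly generated 256-bit key.
let key = SymmetricKey(size: .bits256)
let sealed = try AES.GCM.seal(message, using: key)
let decrypted = try AES.GCM.open(sealed, using: key)
```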
MetricKit
MetricKit is a new framework that gives you on-device power and performance metrics about your app captured by the system, which you can use to improve the performance of your app. For more information, see the MetricKit framework documentation. To learn how to make performance improvements to your app using MetricKit, see Improving Your App’s Performance.
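Receiving metrics amounts to registering a subscriber with the shared metric manager; a sketch:

```swift
import MetricKit

// The system delivers aggregated daily payloads to registered subscribers.
class MetricsObserver: NSObject, MXMetricManagerSubscriber {
    func startObserving() {
        MXMetricManager.shared.add(self)
    }

    func didReceive(_ payloads: [MXMetricPayload]) {
        for payload in payloads {
            // Inspect payload.cpuMetrics, payload.memoryMetrics, and so on,
            // or forward payload.jsonRepresentation() to your own backend.
            _ = payload
        }
    }
}
```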
PencilKit
The new PencilKit framework makes it easy to incorporate hand-drawn content into your app. PencilKit provides a drawing environment for your iOS app that takes input from Apple Pencil or the user’s finger and turns it into high-quality images you can display in either iOS or macOS. The environment comes with tools for creating, erasing, and selecting lines.
For more information, see the PencilKit framework documentation.
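Embedding a canvas and rendering its drawing to an image can be sketched as:

```swift
import PencilKit
import UIKit

class DrawingViewController: UIViewController {
    let canvasView = PKCanvasView()

    override func viewDidLoad() {
        super.viewDidLoad()
        canvasView.frame = view.bounds
        // Start with a black pen; the user can draw with Pencil or finger.
        canvasView.tool = PKInkingTool(.pen, color: .black, width: 5)
        view.addSubview(canvasView)
    }

    // Render the current drawing into a UIImage for display or export.
    func snapshot() -> UIImage {
        canvasView.drawing.image(from: canvasView.bounds,
                                 scale: UIScreen.main.scale)
    }
}
```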
Core ML 3
Core ML 3 now supports on-device model personalization, allowing you to update a model by retraining or fine-tuning it with user-specific data privately from within your app. Core ML has also greatly expanded its support for dynamic neural networks, with over 100 layer types.
With the addition of the new BackgroundTasks framework, you can now schedule longer running Core ML model updates and predictions in the background.
For more information, see the Core ML framework documentation.
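A sketch of kicking off an on-device update; `modelURL` and `trainingData` are assumptions standing in for your compiled updatable model and an `MLBatchProvider` of user data:

```swift
import CoreML

func personalize(modelURL: URL, trainingData: MLBatchProvider) throws {
    let task = try MLUpdateTask(forModelAt: modelURL,
                                trainingData: trainingData,
                                configuration: nil,
                                completionHandler: { context in
        // context.model is the updated model; persist it for future predictions.
        try? context.model.write(to: modelURL)
    })
    task.resume()
}
```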
Vision
Starting with iOS 13, you can use the Vision framework to:
- Perform saliency analysis on images.
- Detect humans and animals in images.
- Classify images for categorization and search.
- Analyze image similarity with feature prints.
- Perform text recognition on documents.
For more information, see the Vision framework documentation.
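As an example of the last capability, text recognition follows Vision's usual request/handler pattern; a sketch for a `CGImage`:

```swift
import Vision

func recognizeText(in image: CGImage) {
    // The completion handler receives recognized-text observations.
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        for observation in observations {
            if let best = observation.topCandidates(1).first {
                print(best.string)
            }
        }
    }
    request.recognitionLevel = .accurate  // favor accuracy over speed

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```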
VisionKit
With the new VisionKit framework, your app can let users scan documents using the device’s camera, like those you capture in the Notes app. Combine this feature with Vision’s text recognition to extract text from scanned documents. To learn more about scanning documents, see the VisionKit framework documentation.
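A sketch of presenting the system document scanner and collecting the scanned pages through its delegate:

```swift
import VisionKit
import UIKit

class ScanCoordinator: NSObject, VNDocumentCameraViewControllerDelegate {
    func presentScanner(from presenter: UIViewController) {
        let scanner = VNDocumentCameraViewController()
        scanner.delegate = self
        presenter.present(scanner, animated: true)
    }

    func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                      didFinishWith scan: VNDocumentCameraScan) {
        for pageIndex in 0..<scan.pageCount {
            let pageImage = scan.imageOfPage(at: pageIndex)
            // Feed pageImage.cgImage to VNRecognizeTextRequest to extract text.
            _ = pageImage
        }
        controller.dismiss(animated: true)
    }
}
```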
Metal
Metal gives the GPU even greater control of the graphics and compute pipeline, adds features that make it easier to perform advanced GPU processing, and simplifies the work you need to do to support different kinds of GPUs. New tools, including Metal support in Simulator, help you get started faster and understand whether your iOS app is using Metal correctly. For more information, see Metal.
Metal Performance Shaders provides new options for image processing, Machine Learning, and ray tracing, including GPU generation and dynamic updates of ray tracing acceleration structures. For more information, see the Metal Performance Shaders framework documentation.
Core Data
Sync your Core Data store with CloudKit, giving users of your app seamless access to their data across all their devices. Core Data with CloudKit combines the benefits of local persistence with cloud backup and distribution. To learn more, see Mirroring a Core Data Store with CloudKit.
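The core of the setup is swapping `NSPersistentContainer` for its CloudKit-backed subclass; "Model" below is a stand-in for the name of your data model file:

```swift
import CoreData

// NSPersistentCloudKitContainer mirrors the local store to CloudKit.
let container = NSPersistentCloudKitContainer(name: "Model")
container.loadPersistentStores { _, error in
    if let error = error {
        fatalError("Failed to load store: \(error)")
    }
}
// Merge changes arriving from other devices into the view context.
container.viewContext.automaticallyMergesChangesFromParent = true
```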
Core NFC
With the Core NFC framework, your apps can now support tag writing, including writing to NDEF-formatted tags. The framework also provides support for reading and writing tags using native protocols such as ISO 7816, MIFARE, ISO 15693, and FeliCa. For more information, see the Core NFC framework documentation.
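A sketch of reading NDEF messages; the system presents the scanning UI, and results arrive through the session delegate:

```swift
import CoreNFC

class NDEFReader: NSObject, NFCNDEFReaderSessionDelegate {
    var session: NFCNDEFReaderSession?

    func beginScanning() {
        session = NFCNDEFReaderSession(delegate: self,
                                       queue: nil,
                                       invalidateAfterFirstRead: true)
        session?.alertMessage = "Hold your iPhone near the tag."
        session?.begin()
    }

    func readerSession(_ session: NFCNDEFReaderSession,
                       didDetectNDEFs messages: [NFCNDEFMessage]) {
        for message in messages {
            for record in message.records {
                print(record.payload)  // raw NDEF record payload
            }
        }
    }

    func readerSession(_ session: NFCNDEFReaderSession,
                       didInvalidateWithError error: Error) {
        // Handle user cancellation or read errors.
    }
}
```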
Your apps can provide reservation information to Siri with context and at specific times so the user can take relevant actions based on the circumstances. For example, they can confirm a hotel reservation, be reminded to check in for a flight, and get help returning a rental car. For more information, see Siri Event Suggestions.