Explore ARKit 4
ARKit 4 enables you to build the next generation of augmented reality apps to transform how people connect with the world around them. We'll walk you through the latest improvements to Apple's augmented reality platform, including how to use Location Anchors to connect virtual objects with a real-world longitude, latitude, and altitude. Discover how to harness the LiDAR Scanner on iPad Pro and obtain a depth map of your environment. And learn how to track faces in AR on more devices, including the iPad Air (3rd generation), iPad mini (5th generation), and all devices with the A12 Bionic chip or later that have a front-facing camera.
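The expanded face-tracking support mentioned above follows the same availability-check pattern used throughout this session. The sketch below is illustrative only; `runFaceTracking(on:)` is a hypothetical helper, and the code assumes a view controller that owns an `ARSession`:

```swift
import ARKit

// Hypothetical helper on a view controller that owns an ARSession.
func runFaceTracking(on session: ARSession) {
    // Face tracking requires a TrueDepth camera or, new in ARKit 4,
    // any device with an A12 Bionic chip or later and a front-facing camera.
    guard ARFaceTrackingConfiguration.isSupported else {
        // Face tracking not supported on this device
        return
    }
    session.run(ARFaceTrackingConfiguration())
}
```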
To get the most out of this session, you should be familiar with how your apps can take advantage of the LiDAR Scanner on iPad Pro. Watch “Advanced Scene Understanding in AR” for more information.
Once you've learned how to leverage ARKit 4 in your iOS and iPadOS apps, explore realistic rendering improvements in “What's New in RealityKit” and other ARKit features like People Occlusion and Motion Capture with “Introducing ARKit 3”.
Resources
- Displaying a point cloud using scene depth
- Tracking geographic locations in AR
- Creating a fog effect using scene depth
- ARKit
Related Videos
WWDC21
WWDC20
WWDC19
6:58 - Availability
// Check device support for geo-tracking
guard ARGeoTrackingConfiguration.isSupported else {
    // Geo-tracking not supported on this device
    return
}

// Check current location is supported for geo-tracking
ARGeoTrackingConfiguration.checkAvailability { (available, error) in
    guard available else {
        // Geo-tracking not supported at current location
        return
    }

    // Run ARSession
    let arView = ARView()
    arView.session.run(ARGeoTrackingConfiguration())
}
8:38 - Adding Location Anchors
// Create coordinates
let coordinate = CLLocationCoordinate2D(latitude: 37.795313, longitude: -122.393792)

// Create Location Anchor
let geoAnchor = ARGeoAnchor(name: "Ferry Building", coordinate: coordinate)

// Add Location Anchor to session
arView.session.add(anchor: geoAnchor)

// Create a RealityKit anchor entity
let geoAnchorEntity = AnchorEntity(anchor: geoAnchor)

// Anchor content under the RealityKit anchor
geoAnchorEntity.addChild(generateSignEntity())

// Add the RealityKit anchor to the scene
arView.scene.addAnchor(geoAnchorEntity)
10:32 - Positioning Content
// Create a new entity for our virtual content
let signEntity = generateSignEntity()

// Add the virtual content entity to the Geo Anchor entity
geoAnchorEntity.addChild(signEntity)

// Rotate text to face the city
let orientation = simd_quatf(angle: -Float.pi / 3.5, axis: SIMD3<Float>(0, 1, 0))
signEntity.setOrientation(orientation, relativeTo: geoAnchorEntity)

// Elevate text to 35 meters above ground level
let position = SIMD3<Float>(0, 35, 0)
signEntity.setPosition(position, relativeTo: geoAnchorEntity)
14:08 - User Interactive Location Anchors
let session = ARSession()
let worldPosition = raycastLocationFromUserTap()

// Convert a world-space position into geographic coordinates
session.getGeoLocation(forPoint: worldPosition) { (location, altitude, error) in
    if let error = error { ... }
    let geoAnchor = ARGeoAnchor(coordinate: location, altitude: altitude)
}
20:32 - Enabling the Depth API
// Enabling the depth API
let session = ARSession()
let configuration = ARWorldTrackingConfiguration()

// Check if configuration and device supports .sceneDepth
if type(of: configuration).supportsFrameSemantics(.sceneDepth) {
    // Activate sceneDepth
    configuration.frameSemantics = .sceneDepth
}
session.run(configuration)

...

// Accessing depth data
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let depthData = frame.sceneDepth else { return }
    // Use depth data
}
21:12 - Depth API alongside person occlusion
// Using the depth API alongside person occlusion
let session = ARSession()
let configuration = ARWorldTrackingConfiguration()

// Set required frame semantics
let semantics: ARConfiguration.FrameSemantics = .personSegmentationWithDepth

// Check if configuration and device supports the required semantics
if type(of: configuration).supportsFrameSemantics(semantics) {
    // Activate .personSegmentationWithDepth
    configuration.frameSemantics = semantics
}
session.run(configuration)
25:41 - Raycasting
let session = ARSession()

// Legacy hit-testing API, superseded by raycasting
hitTest(point, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane, .estimatedHorizontalPlane])

// Raycasting: build a query, then track it for continuous result updates
let query = arView.makeRaycastQuery(from: point, allowing: .estimatedPlane, alignment: .any)
let raycast = session.trackedRaycast(query) { results in
    // result updates
}