Bring your iOS or iPadOS game to visionOS
Discover how to transform your iOS or iPadOS game into a uniquely visionOS experience. Add a 3D frame or an immersive background to amplify both immersion and fun. Use stereoscopy or head tracking to add depth to your window and draw players further in.
Chapters
- 0:00 - Introduction
- 1:42 - Render on visionOS
- 3:48 - Compatible to native
- 6:41 - Add a frame and a background
- 8:00 - Enhance the rendering
Resources
Related Videos
WWDC24
- Build compelling spatial photo and video experiences
- Discover RealityKit APIs for iOS, macOS and visionOS
- Build a spatial drawing app with RealityKit
- Render Metal with passthrough in visionOS
- Explore game input in visionOS
WWDC23
5:44 - Render with Metal in a UIView
// Render with Metal in a UIView.
class CAMetalLayerBackedView: UIView, CAMetalDisplayLinkDelegate {
    var displayLink: CAMetalDisplayLink!
    var renderFunction: ((CAMetalDrawable) -> Void)?

    override class var layerClass: AnyClass {
        return CAMetalLayer.self
    }

    func setup(device: MTLDevice) {
        // Keep a reference to the display link so the delegate stays alive.
        displayLink = CAMetalDisplayLink(metalLayer: self.layer as! CAMetalLayer)
        displayLink.add(to: .current, forMode: .default)
        displayLink.delegate = self
    }

    func metalDisplayLink(_ link: CAMetalDisplayLink,
                          needsUpdate update: CAMetalDisplayLink.Update) {
        let drawable = update.drawable
        renderFunction?(drawable)
    }
}
-
6:20 - Render with Metal to a RealityKit LowLevelTexture
// Render Metal to a RealityKit LowLevelTexture.
let lowLevelTexture = try! LowLevelTexture(descriptor: .init(
    pixelFormat: .rgba8Unorm,
    width: resolutionX,
    height: resolutionY,
    depth: 1,
    mipmapLevelCount: 1,
    textureUsage: [.renderTarget]
))

let textureResource = try! TextureResource(
    from: lowLevelTexture
)
// Assign textureResource to a material.

let commandBuffer: MTLCommandBuffer = queue.makeCommandBuffer()!
let mtlTexture: MTLTexture = lowLevelTexture.replace(using: commandBuffer)
// Draw into mtlTexture.
-
7:06 - Metal viewport with a 3D RealityKit frame around it
// Metal viewport with a 3D RealityKit frame around it.
struct ContentView: View {
    @State var game = Game()

    var body: some View {
        ZStack {
            CAMetalLayerView { drawable in
                game.render(drawable)
            }
            RealityView { content in
                content.add(try! await Entity(named: "Frame"))
            }.frame(depth: 0)
        }
    }
}
-
7:45 - Windowed game with an immersive background
// Windowed game with an immersive background.
@main
struct TestApp: App {
    @State private var appModel = AppModel()

    var body: some Scene {
        WindowGroup {
            // Metal render
            ContentView(appModel)
        }

        ImmersiveSpace(id: "ImmersiveSpace") {
            // RealityKit background
            ImmersiveView(appModel)
        }.immersionStyle(selection: .constant(.progressive), in: .progressive)
    }
}
-
13:11 - Render to multiple views for stereoscopy
// Render to multiple views for stereoscopy.
override func draw(provider: DrawableProviding) {
    encodeShadowMapPass()

    for viewIndex in 0..<provider.viewCount {
        scene.update(viewMatrix: provider.viewMatrix(viewIndex: viewIndex),
                     projectionMatrix: provider.projectionMatrix(viewIndex: viewIndex))

        let commandBuffer = beginDrawableCommands()
        if let color = provider.colorTexture(viewIndex: viewIndex, for: commandBuffer),
           let depthStencil = provider.depthStencilTexture(viewIndex: viewIndex, for: commandBuffer) {
            encodePass(into: commandBuffer, color: color, depth: depthStencil)
        }
        endFrame(commandBuffer)
    }
}
-
13:55 - Query the head position from ARKit every frame
// Query the head position from ARKit every frame.
import ARKit

let arSession = ARKitSession()
let worldTracking = WorldTrackingProvider()
try await arSession.run([worldTracking])

// Every frame
guard let deviceAnchor = worldTracking.queryDeviceAnchor(
    atTimestamp: CACurrentMediaTime() + presentationTime
) else { return }

let transform: simd_float4x4 = deviceAnchor.originFromAnchorTransform
-
14:22 - Convert the head position from the ImmersiveSpace to a window
// Convert the head position from the ImmersiveSpace to a window.
let headPositionInImmersiveSpace: SIMD3<Float> = deviceAnchor
    .originFromAnchorTransform
    .position

let windowInImmersiveSpace: float4x4 = windowEntity
    .transformMatrix(relativeTo: .immersiveSpace)

let headPositionInWindow: SIMD3<Float> = windowInImmersiveSpace
    .inverse
    .transform(headPositionInImmersiveSpace)

renderer.setCameraPosition(headPositionInWindow)
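As a side note (not part of the session), the conversion above is an ordinary change of basis: if \(W\) is the window entity's transform expressed in immersive-space coordinates, then a point \(p_{\text{space}}\) given in immersive space maps into window-local coordinates through the inverse transform:

```latex
p_{\text{window}} = W^{-1}\, p_{\text{space}},
\qquad
W = \text{windowEntity.transformMatrix(relativeTo: .immersiveSpace)}
```

which is exactly what the `windowInImmersiveSpace.inverse.transform(...)` chain computes before handing the head position to the renderer.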
-
15:05 - Query the head position from ARKit every frame
// Query the head position from ARKit every frame.
import ARKit

let arSession = ARKitSession()
let worldTracking = WorldTrackingProvider()
try await arSession.run([worldTracking])

// Every frame
guard let deviceAnchor = worldTracking.queryDeviceAnchor(
    atTimestamp: CACurrentMediaTime() + presentationTime
) else { return }

let transform: simd_float4x4 = deviceAnchor.originFromAnchorTransform
-
15:47 - Build the camera and projection matrices
// Build the camera and projection matrices.
let cameraPosition: SIMD3<Float>
let viewportBounds: BoundingBox

// Camera facing -Z
let cameraTransform = simd_float4x4(AffineTransform3D(translation: Size3D(cameraPosition)))

let zNear: Float = viewportBounds.max.z - cameraPosition.z
let l /* left   */: Float = viewportBounds.min.x - cameraPosition.x
let r /* right  */: Float = viewportBounds.max.x - cameraPosition.x
let b /* bottom */: Float = viewportBounds.min.y - cameraPosition.y
let t /* top    */: Float = viewportBounds.max.y - cameraPosition.y

let cameraProjection = simd_float4x4(rows: [
    [2*zNear/(r-l),             0, (r+l)/(r-l),      0],
    [            0, 2*zNear/(t-b), (t+b)/(t-b),      0],
    [            0,             0,           1, -zNear],
    [            0,             0,           1,      0]
])
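A brief note on the matrix above (my reading, not stated in the session): it encodes an off-axis (asymmetric) perspective frustum whose near plane coincides with the window plane, with `l`, `r`, `b`, `t` the window edges measured relative to the camera. Applying the third and fourth rows to a camera-space point \((x, y, z, 1)\) gives

```latex
z' = z - z_{\text{near}},
\qquad
w' = z,
\qquad
\frac{z'}{w'} = 1 - \frac{z_{\text{near}}}{z},
```

so normalized depth is \(0\) exactly on the window plane (\(z = z_{\text{near}}\)) and tends toward \(1\) as the point moves far from the window, matching Metal's \([0, 1]\) depth range.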