In the project template for using ARKit with Metal, there's a definition for the memory alignment of the buffer that holds the SharedUniforms structure. It is defined like this:
// The 16 byte aligned size of our uniform structures
let kAlignedSharedUniformsSize: Int = (MemoryLayout<SharedUniforms>.size & ~0xFF) + 0x100
If I understood correctly, this line of code does this:
Calculates the size of the SharedUniforms structure in bytes
Clears out the last 8 bits of the size representation
Adds 256 bytes to the size
So if I'm not mistaken, this rounds the size of the SharedUniforms structure up to the next multiple of 256 bytes, not 16 bytes as the comment suggests.
Is there something I've overlooked? I can't wrap my head around how this would align the size to 16 bytes.
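For comparison, the round-up pattern I would have expected looks like this (the helper name is mine, purely for illustration):
// Hypothetical helper: round a size up to the next multiple of a power-of-two alignment.
func alignedUp(_ size: Int, to alignment: Int) -> Int {
    (size + alignment - 1) & ~(alignment - 1)
}
// 16-byte alignment would mask with ~0xF and step by 0x10; masking with ~0xFF
// and adding 0x100, as the template does, lands on a 256-byte boundary.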
In the code below, I have extracted face mesh vertices from ARKit face anchors and created a custom face mesh using SceneKit's SCNGeometry. This enables me to stretch face mesh vertices as required.
Now the problem I am facing is as follows. I am trying to apply a lipstick texture material of type SCNMaterial. Although ARSCNFaceGeometry lets me apply different textures through SCNMaterial and SCNNode, I am not able to do the same with my CustomFaceGeometry. When I apply a lipstick texture like the image attached below, the entire face gets colored or modified; I only want to affect the part of the face where the texture's alpha is greater than 0 and leave the rest of the face unmodified.
Can you give me a detailed solution using code?
// ViewController.swift
import UIKit
import ARKit
import SceneKit
import simd
class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
    @IBOutlet weak var sceneView: ARSCNView!

    let vertexIndicesOfInterest = [250]
    var customFaceGeometry: CustomFaceGeometry!
    var scnFaceGeometry: SCNGeometry!
    private var faceUvGenerator: FaceTextureGenerator!
    var faceGeometry: ARSCNFaceGeometry!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARFaceTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}
extension ViewController {
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        // Note: this creates and adds a new child node on every update callback.
        customFaceGeometry = CustomFaceGeometry(fromFaceAnchor: faceAnchor)
        let customGeometryNode = SCNNode(geometry: customFaceGeometry.geometry)
        customFaceGeometry.geometry.firstMaterial?.fillMode = .lines
        customFaceGeometry.geometry.firstMaterial?.transparency = 0.0
        customFaceGeometry.geometry.firstMaterial?.isDoubleSided = true
        node.addChildNode(customGeometryNode)
    }

    func renderer(_ renderer: SCNSceneRenderer, willUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceMeshNode = node.childNodes.first else { return }
        DispatchQueue.main.async {
            self.customFaceGeometry.update(withFaceAnchor: faceAnchor, node: faceMeshNode)
        }
    }
}
class CustomFaceGeometry {
    var geometry: SCNGeometry
    let lipImage = UIImage(named: "Face.scnassets/lip_arks_y7.png")

    init(fromFaceAnchor faceAnchor: ARFaceAnchor) {
        self.geometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor)!
    }

    static func createCustomFaceGeometry(fromVertices vertices_o: [SCNVector3]) -> SCNGeometry {
        var vertices = vertices_o
        let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size)
        let vertexSource = SCNGeometrySource(data: vertexData,
                                             semantic: .vertex,
                                             vectorCount: vertices.count,
                                             usesFloatComponents: true,
                                             componentsPerVector: 3,
                                             bytesPerComponent: MemoryLayout<Float>.size,
                                             dataOffset: 0,
                                             dataStride: MemoryLayout<SCNVector3>.stride)
        let indices: [Int32] = Array(0..<Int32(vertices.count))
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<Int32>.size)
        let element = SCNGeometryElement(data: indexData, primitiveType: .point, primitiveCount: vertices.count, bytesPerIndex: MemoryLayout<Int32>.size)
        return SCNGeometry(sources: [vertexSource], elements: [element])
    }

    static func createGeometry(fromFaceAnchor faceAnchor: ARFaceAnchor) -> SCNGeometry {
        let vertices = faceAnchor.geometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
        return CustomFaceGeometry.createCustomFaceGeometry(fromVertices: vertices)
    }

    func update(withFaceAnchor faceAnchor: ARFaceAnchor, node: SCNNode) {
        if let newGeometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor) {
            node.geometry = newGeometry
            let lipstickNode = SCNNode(geometry: newGeometry)
            let lipstickTextureMaterial = SCNMaterial()
            lipstickTextureMaterial.diffuse.contents = lipImage
            lipstickTextureMaterial.transparency = 1.0
            lipstickNode.geometry?.firstMaterial = lipstickTextureMaterial
            node.geometry?.firstMaterial?.fillMode = .lines
            node.geometry?.firstMaterial?.transparency = 0.5
        }
    }

    static func createCustomSCNGeometry(from faceAnchor: ARFaceAnchor) -> SCNGeometry? {
        let faceGeometry = faceAnchor.geometry
        var vertices: [SCNVector3] = faceGeometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
        print(vertices[250])

        // Stretch the selected lower-lip vertices along y by a fixed ratio.
        let ll_ratio_y = Float(0.969999)
        for i in [290, 274, 265, 700, 730, 25, 709, 725, 710] {
            vertices[i] = SCNVector3(x: vertices[i].x, y: vertices[i].y * ll_ratio_y, z: vertices[i].z)
        }

        let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size)
        let vertexSource = SCNGeometrySource(data: vertexData, semantic: .vertex, vectorCount: vertices.count, usesFloatComponents: true, componentsPerVector: 3, bytesPerComponent: MemoryLayout<Float>.size, dataOffset: 0, dataStride: MemoryLayout<SCNVector3>.stride)
        let indices: [UInt16] = faceGeometry.triangleIndices.map(UInt16.init)
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<UInt16>.size)
        let element = SCNGeometryElement(data: indexData, primitiveType: .triangles, primitiveCount: indices.count / 3, bytesPerIndex: MemoryLayout<UInt16>.size)
        // Note: only a vertex source is supplied here (no .texcoord source),
        // so an image-based material has no UV mapping on this geometry.
        return SCNGeometry(sources: [vertexSource], elements: [element])
    }
}
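For reference, here is how I believe texture coordinates could be added inside createCustomSCNGeometry(from:), since the geometry above supplies no .texcoord source; the .aOne masking at the end is my assumption about the intended alpha > 0 behavior, not something I have verified:
// Forward ARFaceGeometry's per-vertex UVs as a second geometry source.
let uvs: [vector_float2] = faceGeometry.textureCoordinates
let uvData = Data(bytes: uvs, count: uvs.count * MemoryLayout<vector_float2>.size)
let uvSource = SCNGeometrySource(data: uvData,
                                 semantic: .texcoord,
                                 vectorCount: uvs.count,
                                 usesFloatComponents: true,
                                 componentsPerVector: 2,
                                 bytesPerComponent: MemoryLayout<Float>.size,
                                 dataOffset: 0,
                                 dataStride: MemoryLayout<vector_float2>.stride)
// Then build the geometry with both sources:
// return SCNGeometry(sources: [vertexSource, uvSource], elements: [element])

// With UVs present, the material can take its coverage from the texture's
// alpha channel, leaving texels with alpha == 0 untouched:
let lipstickMaterial = SCNMaterial()
lipstickMaterial.diffuse.contents = UIImage(named: "Face.scnassets/lip_arks_y7.png")
lipstickMaterial.transparencyMode = .aOne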
Hello,
I've been tinkering with PortalComponent on visionOS a bit but noticed that the content of the WorldComponent is always clipped to the mesh geometry of whatever entities have the PortalComponent applied. Now I'm wondering if there is any way or trick to allow contents of the portal to peek out – similar to the Encounter Dinosaurs experience on Vision Pro (I assume it also uses PortalComponent?).
I saw that PortalComponent has a clippingPlane property (https://developer.apple.com/documentation/realitykit/portalcomponent/clippingplane-swift.property). But so far I haven't been able to achieve a perceptible visual difference with it.
If possible I would like to avoid hacky tricks using duplicate meshes or similar to achieve this.
Thanks for any hints!
Here is an example of fragment shader code (rendering a cube with texCoord in [0, 1]):
colorSample.x = in.texCoord.x;
Which produces this result:
However, if I make a small change to the code like this:
colorSample.x = fract(ceil(0.1 + in.texCoord.x * 0.8) * 1000000) + in.texCoord.x;
Then it will produce this result:
If I disable fast-math in the second case, it produces the same image as the first. It seems that in fast-math mode, a large argument to fract() affects the precision of the other operand in the same expression.
Is this a bug in fast-math mode? How should I work around this problem?
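One workaround I am considering, assuming the precision loss comes from the fast-math variant of fract(): Metal's math functions have explicit precise:: counterparts, so a single call can opt out of fast-math without disabling it for the whole shader. A sketch:
// Force precise semantics for just this call (names follow the snippet above).
float big = ceil(0.1 + in.texCoord.x * 0.8) * 1000000;
colorSample.x = metal::precise::fract(big) + in.texCoord.x;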
How to integrate UIDevice rotation and creating a new UIBezierPath after rotation?
My challenge here is to successfully integrate UIDevice rotation and creating a new UIBezierPath every time the UIDevice is rotated.
(Please accept my apologies for this Post’s length .. but I can’t seem to avoid it)
As a preamble, I have bounced back and forth between
NotificationCenter.default.addObserver(self,
selector: #selector(rotated),
name: UIDevice.orientationDidChangeNotification,
object: nil)
called within my viewDidLoad() together with
@objc func rotated() {
}
and
override func viewWillLayoutSubviews() {
// please see code below
}
I had much better success with viewWillLayoutSubviews() than with rotated(), so let me provide detailed code just for viewWillLayoutSubviews().
I have concluded that every time I rotate the UIDevice, a new UIBezierPath needs to be generated because the positions and sizes of my various SKSpriteNodes change.
I am definitely not saying that I have to create a new UIBezierPath with every rotation .. just saying I think I have to.
Start of Code
// declared at the top of my `GameViewController`:
var myTrain: SKSpriteNode!
var savedTrainPosition: CGPoint?
var trackOffset = 60.0
var trackRect: CGRect!
var trainPath: UIBezierPath!
My UIBezierPath creation and SKAction.follow code is as follows:
// called within my setTrackPaths() – see way below
func createTrainPath() {
// savedTrainPosition initially set within setTrackPaths()
// and later reset when stopping + resuming moving myTrain
// via stopFollowTrainPath()
trackRect = CGRect(x: savedTrainPosition!.x,
y: savedTrainPosition!.y,
width: tracksWidth,
height: tracksHeight)
trainPath = UIBezierPath(ovalIn: trackRect)
trainPath = trainPath.reversing() // makes myTrain move CW
} // createTrainPath
func startFollowTrainPath() {
let theSpeed = Double(5*thisSpeed)
var trainAction = SKAction.follow(
trainPath.cgPath,
asOffset: false,
orientToPath: true,
speed: theSpeed)
trainAction = SKAction.repeatForever(trainAction)
createPivotNodeFor(myTrain)
myTrain.run(trainAction, withKey: runTrainKey)
} // startFollowTrainPath
func stopFollowTrainPath() {
    guard myTrain != nil else { return }
    myTrain.removeAction(forKey: runTrainKey)
    savedTrainPosition = myTrain.position
} // stopFollowTrainPath
Here is the detailed viewWillLayoutSubviews I promised earlier:
override func viewWillLayoutSubviews() {
super.viewWillLayoutSubviews()
if (thisSceneName == "GameScene") {
// code to pause moving game pieces
setGamePieceParms() // for GamePieces, e.g., trainWidth
setTrackPaths() // for trainPath
reSizeAndPositionNodes()
// code to resume moving game pieces
} // if (thisSceneName == "GameScene")
} // viewWillLayoutSubviews
func setGamePieceParms() {
if (thisSceneName == "GameScene") {
roomScale = 1.0
let roomRect = UIScreen.main.bounds
roomWidth = roomRect.width
roomHeight = roomRect.height
roomPosX = 0.0
roomPosY = 0.0
tracksScale = 1.0
tracksWidth = roomWidth - 4*trackOffset // inset from screen edge
#if os(iOS)
if UIDevice.current.orientation.isLandscape {
tracksHeight = 0.30*roomHeight
}
else {
tracksHeight = 0.38*roomHeight
}
#endif
// center horizontally
tracksPosX = roomPosX
// flush with bottom of UIScreen
let temp = roomPosY - roomHeight/2
tracksPosY = temp + trackOffset + tracksHeight/2
trainScale = 2.8
trainWidth = 96.0*trainScale // original size = 96 x 110
trainHeight = 110.0*trainScale
trainPosX = roomPosX
#if os(iOS)
if UIDevice.current.orientation.isLandscape {
trainPosY = temp + trackOffset + tracksHeight + 0.30*trainHeight
}
else {
trainPosY = temp + trackOffset + tracksHeight + 0.20*trainHeight
}
#endif
    } // if (thisSceneName == "GameScene")
} // setGamePieceParms
// a work in progress
func setTrackPaths() {
if (thisSceneName == "GameScene") {
// both branches set the same value, so a single assignment suffices
savedTrainPosition = CGPoint(x: tracksPosX - tracksWidth/2, y: tracksPosY)
createTrainPath()
} // if (thisSceneName == "GameScene")
} // setTrackPaths
func reSizeAndPositionNodes() {
myTracks.size = CGSize(width: tracksWidth, height: tracksHeight)
myTracks.position = CGPoint(x: tracksPosX, y: tracksPosY)
// more Nodes here ..
}
End of Code
My theory is that when setTrackPaths() runs on every UIDevice rotation, createTrainPath() is called.
Nothing of visual significance happens to the UIBezierPath until I call startFollowTrainPath().
Bottom Line
It is then that I can see that a new UIBezierPath was not created as it should have been when createTrainPath() ran after I rotated the UIDevice: the path being followed is still the old one.
If you’ve made it this far through my long code, the question is what do I need to do to make a new UIBezierPath that fits the resized and repositioned SKSpriteNode?
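My best guess at the required sequence, reusing the methods above and assuming (as I understand it) that SKAction.follow captures the CGPath at creation time, so the running action must be removed and rebuilt on the new path:
override func viewWillLayoutSubviews() {
    super.viewWillLayoutSubviews()
    if thisSceneName == "GameScene" {
        stopFollowTrainPath()    // remove the action that captured the old path
        setGamePieceParms()      // recompute sizes/positions for the new orientation
        setTrackPaths()          // calls createTrainPath() with the new geometry
        reSizeAndPositionNodes()
        startFollowTrainPath()   // build a fresh SKAction.follow over trainPath.cgPath
    }
}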
Hi, I'm displaying linear gray via CAMetalLayer with the shader below.
fragment float4 fragmentShader(VertexOut in [[stage_in]],
texture2d<float, access::sample> BGRATexture [[ texture(0) ]])
{
float color = in.texCoordinates.x;
return float4(float3(color), 1.0);
}
And my CAMetalLayer has been set to linearSRGB.
metalLayer.colorspace = CGColorSpace(name: CGColorSpace.linearSRGB)
metalLayer.pixelFormat = .bgra8Unorm
Why does the display seem to add gamma? The middle gray comes out as 187 rather than 128.
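For reference, the arithmetic that matches my observation, assuming the compositor applies the standard sRGB encode to the layer's linear values:
import Foundation

// Standard sRGB encode (linear -> gamma-encoded).
func linearToSRGB(_ c: Double) -> Double {
    c <= 0.0031308 ? 12.92 * c : 1.055 * pow(c, 1.0 / 2.4) - 0.055
}

print(linearToSRGB(0.5) * 255) // ~187.5, i.e. middle gray shows as 187, not 128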
Hi there, I have some existing metal rendering / shader views that I would like to use to present stereoscopic content on the Vision Pro. Is there a metal shader function / variable that lets me know which eye we're currently rendering to inside my shader? Something like Unity's unity_StereoEyeIndex? I know RealityKit has GeometrySwitchCameraIndex, so I want something similar (but outside of a RealityKit context).
Many thanks,
Rich
I've been attempting to use the new CAMetalDisplayLink to simplify the code needed to sync my rendering with the display across Apple platforms. One thing I noticed since moving to using CAMetalDisplayLink is that the Metal Performance HUD which I had previously been using to analyze the total memory used by my app (among other things) is suddenly no longer appearing when using CAMetalDisplayLink.
This issue can be reproduced with the Frame Pacing sample from WWDC23.
Anyone from Apple know if this is expected behavior or have an idea on how to get this to work properly?
I've filed FB13495684 for official review.
Hi there -
Where would a dev go these days to get an initial understanding of SceneKit?
The WWDC videos linked in various places seem to be gone?!
For example, the SceneKit page at developer.apple.com features a session-videos link that comes up without any results: https://developer.apple.com/scenekit/
Any advice..?
Cheers,
Jan
I'm using DrawableQueue to create textures that I apply to my ShaderGraphMaterial texture. My Metal renderer uses a range of alpha values as a test.
My objects displayed with the DrawableQueue texture are working as expected, but the alpha component is not working.
Is this an issue with my DrawableQueue descriptor? My ShaderGraphMaterial? A missing setting on my scene objects? Or some limitation in visionOS?
DrawableQueue descriptor
let descriptor = await TextureResource.DrawableQueue.Descriptor(
pixelFormat: .rgba8Unorm,
width: textureResource!.width,
height: textureResource!.height,
usage: [.renderTarget, .shaderRead, .shaderWrite], // Usage should match the requirements for how the texture will be used
//usage: [.renderTarget], // Usage should match the requirements for how the texture will be used
mipmapsMode: .none // Assuming no mipmaps are needed for the text texture
)
let queue = try await TextureResource.DrawableQueue(descriptor)
queue.allowsNextDrawableTimeout = true
await textureResource!.replace(withDrawables: queue)
Draw frame:
guard
let drawable = try? drawableQueue!.nextDrawable(),
let commandBuffer = commandQueue?.makeCommandBuffer()//,
//let renderPipelineState = renderPipelineState
else {
return
}
let renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = drawable.texture
renderPassDescriptor.colorAttachments[0].loadAction = .clear
renderPassDescriptor.colorAttachments[0].storeAction = .store
renderPassDescriptor.colorAttachments[0].clearColor = clearColor
/*renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(
red: clearColor.red,
green: clearColor.green,
blue: clearColor.blue,
alpha: 0.5 )*/
renderPassDescriptor.renderTargetHeight = drawable.texture.height
renderPassDescriptor.renderTargetWidth = drawable.texture.width
guard let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) else {
return
}
renderEncoder.pushDebugGroup("DrawNextFrameWithColor")
//renderEncoder.setRenderPipelineState(renderPipelineState)
// No need to create a render command encoder with shaders, as we are only clearing the drawable.
// Since we are just clearing the drawable to a solid color, no need to draw primitives
renderEncoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
drawable.present()
}
Hi, I'm trying to use metal-cpp, but I get a compile error:
ISO C++ requires the name after '::' to be found in the same scope as the name before '::'
metal-cpp/Foundation/NSSharedPtr.hpp(162):
template <class _Class>
_NS_INLINE NS::SharedPtr<_Class>::~SharedPtr()
{
if (m_pObject)
{
m_pObject->release();
}
}
Use of old-style cast
metal-cpp/Foundation/NSObject.hpp(149):
template <class _Dst>
_NS_INLINE _Dst NS::Object::bridgingCast(const void* pObj)
{
#ifdef __OBJC__
return (__bridge _Dst)pObj;
#else
return (_Dst)pObj;
#endif // __OBJC__
}
The Xcode project was generated using CMake:
target_compile_features(${MODULE_NAME} PRIVATE cxx_std_20)
target_compile_options(${MODULE_NAME}
PRIVATE
"-Wgnu-anonymous-struct"
"-Wold-style-cast"
"-Wdtor-name"
"-Wpedantic"
"-Wno-gnu"
)
Maybe I need to set some CMake flags for the C++ compiler?
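Both diagnostics correspond to warning flags enabled above (-Wdtor-name and -Wold-style-cast) firing inside metal-cpp's own headers. One mitigation I am considering (the path is a placeholder for wherever metal-cpp lives in the project) is to mark the headers as SYSTEM includes so these warnings apply only to my own sources:
# Sketch: system headers are exempt from -W... diagnostics in Clang/GCC.
target_include_directories(${MODULE_NAME} SYSTEM PRIVATE
    ${CMAKE_CURRENT_SOURCE_DIR}/third_party/metal-cpp
)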
What is the most efficient way to use a MTLTexture (created procedurally at run-time) as a RealityKit TextureResource? I update the MTLTexture per-frame using regular Metal rendering, so it’s not something I can do offline. Is there a way to wrap it without doing a copy?
A specific example would be great.
Thank you!
The sample project from https://developer.apple.com/documentation/RealityKit/guided-capture-sample was fine with beta 3.
In beta 4, getting these errors:
Generic struct 'ObservedObject' requires that 'ObjectCaptureSession' conform to 'ObservableObject'
Does anyone have a fix?
Thanks
Is MTKView intentionally unavailable on visionOS or is this an issue with the current beta?
I'm really excited about the Object Capture APIs being moved to iOS, and the complex UI shown in the WWDC session.
I have a few unanswered questions:
Where is the sample code available from?
Are the new Object Capture APIs on iOS limited to certain devices?
Can we capture images from the front facing cameras?
Hi! I am currently trying to upload my iOS app to App Store Connect. Unfortunately, code signing fails with the following error: "Code object is not signed at all.", referencing a binary Metallib (created with metal-tt and an mtlp-json script). I am using Xcode's automatically managed signing and the binary metallib is located inside the "Resources" directory of a framework that I am including with "Embed and sign" in the app. Could anyone give some guidance on what I need to change to make code signing work? Thank you.
After updating to Xcode 14.3, I get this message:
[default] CGSWindowShmemCreateWithPort failed on port 0
I am running the RoomPlan demo app and keep getting the above error, and when I search for somewhere to get the archive in the Metal libraries, my searches come up blank. No files show up in a search that contain such identifiers. A number of messages about "deprecated" interfaces are also displayed. Is it normal to ship demo apps that are hobbled in this way?
Guys,
In my main application bundle, I have included a helper bundle in its Resources. When the helper requests Accessibility permission, the system modal window displays what the helper is requesting permission for.
However, when the helper requests permission for Screen Recording, the system modal window displays that the main application bundle is requesting permission, which includes the helper.
This issue seems to be specific to Ventura, as both requests are displayed on behalf of the helper in Monterey.
I'm wondering if this is a known issue or limitation or if there is a way to make the permission request specifically from the helper.
Hi,
The metal-cpp distribution appears to contain only headers for Foundation and QuartzCore. The LearnMetalCPP download [1] provides a ZIP with a metal-cpp-extensions directory containing AppKit.hpp and MetalKit.hpp headers. First question: are these headers distributed anywhere else more publicly? Without these headers, only the renderer can be fully written in C++ as far as I can tell, i.e. there is no complete C++ NSApplication. Second question: will these headers, if needed, be maintained (e.g. updated and/or extended) by Apple alongside metal-cpp?
[1] https://developer.apple.com/metal/cpp/
Thank you and regards.