Hi!
How do I define and call an inline function in Metal, or a simple function that returns a value?
Case:
inline uint index4D(constant _4D& shape,
                    constant uint& n,
                    constant uint& c,
                    constant uint& h,
                    constant uint& w) {
    return n * shape.C * shape.H * shape.W + c * shape.H * shape.W + h * shape.W + w;
}
When I call it from my kernel function, I get a "No matching function for call" error.
Thanks in advance.
Hello,
I'm creating an app that uses the PhotogrammetrySession class to build 3D objects from photographs (https://developer.apple.com/documentation/realitykit/creating-3d-objects-from-photographs).
I'm wondering why this class works only on Pro iPhones (12 Pro, 13 Pro, 14 Pro, 15 Pro, and 16 Pro) and on no non-Pro iPhone.
My app does not use LiDAR, so that's not the problem.
I thought it could be power-related, but the A18 SoC in the iPhone 16 is more powerful than the A14 Bionic in the iPhone 12 Pro (I could also mention the iPhone 13 Pro and the iPhone 14, which both have the A15 Bionic, whereas only the first is compatible).
Did I miss something that could explain these restrictions?
Is there any plan to make this class usable by every iPhone powerful enough to run it?
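In the meantime I gate the feature at runtime rather than hard-coding a device list; a minimal sketch, assuming PhotogrammetrySession.isSupported is the right check:

import RealityKit

// Minimal runtime gate: ask RealityKit whether Object Capture
// reconstruction is available instead of keeping a device list.
func reconstructionAvailable() -> Bool {
    if #available(iOS 17.0, *) {
        return PhotogrammetrySession.isSupported
    }
    return false
}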
Thanks in advance for answering me.
I tried using the GameController APIs for this, but they didn't seem to work. Is that the recommended API for handling keyboard and mouse? The notifications for mouse and keyboard connect/disconnect don't seem to be delivered on visionOS.
visionOS 2.0 touts keyboard and mouse support, and the Simulator can even forward keyboard and mouse events to the app. But there doesn't seem to be any sample code showing how to programmatically receive either of these. The game controller works fine (on device, not in the Simulator).
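For reference, this is roughly the GameController route I tried (a sketch; on iOS these notifications fire, but on visionOS they never seem to arrive):

import GameController

// Sketch of the GameController-based route: observe keyboard/mouse
// connections, then install input handlers.
NotificationCenter.default.addObserver(forName: .GCKeyboardDidConnect,
                                       object: nil, queue: .main) { note in
    guard let keyboard = note.object as? GCKeyboard else { return }
    keyboard.keyboardInput?.keyChangedHandler = { _, _, keyCode, pressed in
        print("key \(keyCode): \(pressed ? "down" : "up")")
    }
}
NotificationCenter.default.addObserver(forName: .GCMouseDidConnect,
                                       object: nil, queue: .main) { note in
    guard let mouse = note.object as? GCMouse else { return }
    mouse.mouseInput?.mouseMovedHandler = { _, deltaX, deltaY in
        print("mouse moved by (\(deltaX), \(deltaY))")
    }
}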
I'm trying to make a magnifying glass that shows up when the user presses a button and follows the user's finger as it's dragged across the screen.
I came across a UIKit-based solution (https://github.com/niczyja/MagnifyingGlass-Swift), but when implemented in my SKScene, only the crosshairs are shown. Through experimentation I've found that magnifiedView?.layer.render(in: context) in:
public override func draw(_ rect: CGRect) {
    guard let context = UIGraphicsGetCurrentContext() else { return }
    context.translateBy(x: radius, y: radius)
    context.scaleBy(x: scale, y: scale)
    context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
    removeFromSuperview()
    magnifiedView?.layer.render(in: context)
    magnifiedView?.addSubview(self)
}
can be removed without altering the situation, suggesting that this line is not doing what it should. But this is where I hit a brick wall. The view below is shown but not offset or magnified, and any attempt to add something to the context results in a black magnifying glass.
Does anyone know why this is? I don't think it's an issue with the code, so I suspect it's something specific to SpriteKit or SKScene, likely related to how CALayers work.
Any pointers would be greatly appreciated.
Full code below:
import UIKit

public class MagnifyingGlassView: UIView {
    public weak var magnifiedView: UIView? = nil {
        didSet {
            removeFromSuperview()
            magnifiedView?.addSubview(self)
        }
    }

    public var magnifiedPoint: CGPoint = .zero {
        didSet {
            center = .init(x: magnifiedPoint.x + offset.x, y: magnifiedPoint.y + offset.y)
        }
    }

    public var offset: CGPoint = .zero

    public var radius: CGFloat = 50 {
        didSet {
            frame = .init(origin: frame.origin, size: .init(width: radius * 2, height: radius * 2))
            layer.cornerRadius = radius
            crosshair.path = crosshairPath(for: radius)
        }
    }

    public var scale: CGFloat = 2

    public var borderColor: UIColor = .lightGray {
        didSet {
            layer.borderColor = borderColor.cgColor
        }
    }

    public var borderWidth: CGFloat = 3 {
        didSet {
            layer.borderWidth = borderWidth
        }
    }

    public var showsCrosshair = true {
        didSet {
            crosshair.isHidden = !showsCrosshair
        }
    }

    public var crosshairColor: UIColor = .lightGray {
        didSet {
            crosshair.strokeColor = crosshairColor.cgColor
        }
    }

    public var crosshairWidth: CGFloat = 5 {
        didSet {
            crosshair.lineWidth = crosshairWidth
        }
    }

    private let crosshair: CAShapeLayer = CAShapeLayer()

    public convenience init(offset: CGPoint = .zero, radius: CGFloat = 50, scale: CGFloat = 2, borderColor: UIColor = .lightGray, borderWidth: CGFloat = 3, showsCrosshair: Bool = true, crosshairColor: UIColor = .lightGray, crosshairWidth: CGFloat = 0.5) {
        self.init(frame: .zero)
        layer.masksToBounds = true
        layer.addSublayer(crosshair)
        defer {
            self.offset = offset
            self.radius = radius
            self.scale = scale
            self.borderColor = borderColor
            self.borderWidth = borderWidth
            self.showsCrosshair = showsCrosshair
            self.crosshairColor = crosshairColor
            self.crosshairWidth = crosshairWidth
        }
    }

    public func magnify(at point: CGPoint) {
        guard magnifiedView != nil else { return }
        magnifiedPoint = point
        layer.setNeedsDisplay()
    }

    private func crosshairPath(for radius: CGFloat) -> CGPath {
        let path = CGMutablePath()
        path.move(to: .init(x: radius, y: 0))
        path.addLine(to: .init(x: radius, y: bounds.height))
        path.move(to: .init(x: 0, y: radius))
        path.addLine(to: .init(x: bounds.width, y: radius))
        return path
    }

    public override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.translateBy(x: radius, y: radius)
        context.scaleBy(x: scale, y: scale)
        context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
        removeFromSuperview()
        magnifiedView?.layer.render(in: context)
        //If above disabled, no change
        //Possible that nothing's being rendered into context
        //Could it be that SKScene view has no layer?
        magnifiedView?.addSubview(self)
    }
}
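In case it points someone in the right direction: the only alternative I can think of is snapshotting the view instead of rendering its layer, something like this untested sketch (drawHierarchy(in:afterScreenUpdates:) may well be too slow for a per-frame magnifier):

import UIKit

// Untested sketch: snapshot the magnified view into an image instead of
// layer.render(in:), since Metal-backed views (like SKView) don't route
// their content through Core Animation's render(in:).
func snapshot(of view: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { _ in
        view.drawHierarchy(in: view.bounds, afterScreenUpdates: false)
    }
}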
I would like to preload and use some images for both SpriteKit and SceneKit models (my game uses SceneKit with a SpriteKit overlay), and as far as I can see the only efficient way would be to create and preload SKTexture objects, which can be supplied to both SKSpriteNode(texture:) and SCNMaterial.diffuse.contents.
The problem is that SKTextures are rendered too bright in SceneKit, for some unknown reason. Here is a comparison between rendering an image (from a URL) and an SKTexture:
And the code that produces it:
let url = Bundle.main.url(forResource: "art.scnassets/texture.png", withExtension: nil)!
let plane1 = SCNPlane(width: 10, height: 10)
plane1.firstMaterial!.diffuse.contents = url.path
let node1 = SCNNode(geometry: plane1)
node1.position.x = -5
scene.rootNode.addChildNode(node1)
let plane2 = SCNPlane(width: 10, height: 10)
plane2.firstMaterial!.diffuse.contents = SKTexture(image: NSImage(byReferencing: url))
let node2 = SCNNode(geometry: plane2)
node2.position.x = 5
scene.rootNode.addChildNode(node2)
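For what it's worth, a workaround I'm considering, assuming the brightness difference is a color-space (gamma) mismatch, is to hand SceneKit a CGImage and keep the SKTexture for the SpriteKit side only:

// Hypothetical workaround (assuming a gamma/color-space mismatch):
// give SceneKit a CGImage directly and reserve the SKTexture for SpriteKit.
let image = NSImage(byReferencing: url)
var proposedRect = CGRect(origin: .zero, size: image.size)
if let cgImage = image.cgImage(forProposedRect: &proposedRect, context: nil, hints: nil) {
    plane2.firstMaterial!.diffuse.contents = cgImage
}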
This issue was already mentioned in this other post, but since I wasn't notified of Quinn's reply asking for the feedback number I had filed at the time, it never made any progress.
Can visionOS take screenshots other than by simultaneously pressing the buttons?
I want to know why there is no video frame data when ReplayKit enters the background and then returns to the foreground.
For some reason I can't disable the Graphics HUD.
Not really a problem for development, but it's also showing in TestFlight apps.
For example when swiping down on the keyboard but also in some other places.
Of course I tried disabling the toggle, but even when it's off the HUD is still showing. Even completely disabling Developer mode does not work.
Is this a known issue?
I already scrolled through possibly every Google search result but I can't figure out how to solve this.
I'm experiencing a strange issue where I'm seeing black in a Metal drawable where there should be a different color. When I capture the frame and inspect the value returned from the fragment function, it's correct, but the drawable isn't.
This screenshot hopefully illustrates the issue.
I haven't found any references to similar issues. I saw something about out-of-bounds or NaN values being dropped to 0 (which would be black), but the debugger doesn't indicate that this is happening.
The welcome banner is off the top left side of the screen instead of coming down the center. This behavior is encountered when running my iOS application on macOS.
Hi, does anyone know if there is an easy way to determine the distance between the floor and the ceiling in Vision Pro?
When generating large arrays of random numbers, NaNs show up. They also show up at the same indices when using the same seed, leading me to believe that this is a bug with MPSMatrixRandom's normally distributed Float32 random number distribution.
Happens with both Philox and MTGP32.
Is this intentional, and how do I work around it?
See the original post for a MWE in Swift and Julia: https://github.com/JuliaGPU/Metal.jl/issues/474
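The only workaround I have so far is a CPU-side pass over the output (sketch below; buffer and count are placeholders for the real MPSMatrixRandom output buffer and element count):

import Metal

// Hypothetical clean-up pass over a shared-storage result buffer:
// reinterpret its contents as Float32 and replace any NaN entries.
func scrubNaNs(in buffer: MTLBuffer, count: Int) {
    let values = buffer.contents().bindMemory(to: Float32.self, capacity: count)
    for i in 0..<count where values[i].isNaN {
        values[i] = 0  // or substitute a CPU-generated sample
    }
}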
Hi,
A user sent us a crash report that indicates an error occurring just after loading the default Metal library of our app.
Application Specific Information:
Crashing on exception: *** -[__NSArrayM objectAtIndex:]: index 0 beyond bounds for empty array
The report pointed me to these (simplified) lines of code in the library setup:
_vertexFunctions = [[NSMutableArray alloc] init];
_fragmentFunctions = [[NSMutableArray alloc] init];
id<MTLLibrary> library = [device newDefaultLibrary]; // note: newDefaultLibrary can return nil, which isn't checked here
2 vertex shaders and 5 fragment shaders are then loaded and stored in these two arrays using this method:
- (BOOL)addShaderNamed:(NSString *)name library:(id<MTLLibrary>)library isFragment:(BOOL)isFragment {
    id shader = [library newFunctionWithName:name];
    if (!shader) {
        ALOG(@"Error : Unable to find the shader named : “%@”", name);
        return NO;
    }
    [(isFragment ? _fragmentFunctions : _vertexFunctions) addObject:shader];
    return YES;
}
As you can see, the arrays are not filled if the method fails... however, a few lines later they are used without checking whether they were actually filled, and that causes the crash...
But this coding error doesn't explain why no shader of a given type (or of both types) was added to its array; in other words, why did -newFunctionWithName: return nil for every requested name (since the array in question appears completely empty)?
Clue
This error has only been detected once, by a user running the app on macOS 10.13 with an NVIDIA Web Driver instead of the default macOS graphics driver. Moreover, it wasn't possible to reproduce the problem on the same OS using the native macOS driver.
So my question is: are there any known conflicts between NVIDIA web drivers and the use of Metal libraries? Or would this case require some specific options in the Metal implementation?
Any help appreciated, thanks!
I have this minimal repro code:
import SpriteKit
import SceneKit // added: SCNScene, SCNNode, and friends live in SceneKit
import GameplayKit

class MyGameScene3D: SCNScene {
    weak var node3D: MyNode3D!

    override init() {
        super.init()

        background.contents = UIColor.green

        let playground = SCNNode()
        playground.boundingBox = (
            min: SCNVector3(x: 0, y: 0, z: 0),
            max: SCNVector3(x: 10, y: 10, z: 10))

        let box = SCNNode(geometry: SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0))
        box.position = SCNVector3(x: 5, y: 5, z: 5)
        playground.addChildNode(box)

        playground.position = SCNVector3(x: 0, y: 0, z: 0)
        rootNode.addChildNode(playground)

        let light = SCNLight()
        light.type = .ambient
        let lightNode = SCNNode()
        lightNode.light = light
        rootNode.addChildNode(lightNode)

        let camera = SCNCamera()
        let cameraNode = SCNNode()
        cameraNode.camera = camera
        cameraNode.eulerAngles = SCNVector3(x: -3.14/2, y: 0, z: 0)
        cameraNode.position = SCNVector3(x: 5, y: 11, z: 5)
        rootNode.addChildNode(cameraNode)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func handleTouchBegan(_ location: CGPoint) {
        let res = node3D.hitTest(location)
        print(res)
    }
}

class MyNode3D: SK3DNode {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touch = touches.first!
        let scene = scnScene as! MyGameScene3D
        let location = touch.location(in: self)
        print(location)
        scene.handleTouchBegan(location)
    }
}

class GameScene: SKScene {
    init() {
        super.init(size: CGSize(width: 500, height: 1000))
        self.backgroundColor = .red

        let node3D = MyNode3D()
        let scene3D = MyGameScene3D()
        node3D.scnScene = scene3D
        scene3D.node3D = node3D
        node3D.isUserInteractionEnabled = true
        node3D.viewportSize = CGSize(width: 100, height: 200)
        node3D.position = CGPoint(x: 50, y: 100)
        addChild(node3D)

        let up = SKSpriteNode(color: .blue, size: CGSize(width: 500, height: 10))
        up.anchorPoint = CGPoint(x: 0, y: 0)
        up.position = CGPoint(x: 0, y: 200)
        addChild(up)

        let right = SKSpriteNode(color: .gray, size: CGSize(width: 10, height: 500))
        right.anchorPoint = CGPoint(x: 0, y: 0)
        right.position = CGPoint(x: 100, y: 0)
        addChild(right)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
Basically, I have an SK3DNode of size 100x200, positioned at the lower left corner of the screen (see screenshot below).
Then in this SK3DNode, I have an SCNScene, where I put a 10x10x10 Playground node at position (0, 0, 0). Then I put a camera node right at the top of the Playground at position (5, 11, 5); the camera looks down along the -y axis, with euler angles = (-90, 0, 0).
Then in this Playground, I put a small box of size 1x1x1, at the center of the Playground at (5, 5, 5).
The 2 long bars (gray & blue) are just there to indicate the boundary of the SK3DNode.
The resulting rendering is correct (see screenshot below). However, I can't get the hit test to work. I tap the center 1x1x1 box on screen and the correct coordinate is printed out, but the hit-test result is empty. I want to get the center 1x1x1 box when tapping there. How can I do that?
Update:
I tried to loop through all the pixels from -2000 to 2000, and still no hit:
func handleTouchBegan(_ location: CGPoint) {
    for x in -2000...2000 {
        print("handling x: \(x)")
        for y in -2000...2000 {
            let res = node3D.hitTest(CGPoint(x: CGFloat(x), y: CGFloat(y)))
            if !res.isEmpty {
                print("\(x), \(y), \(res)")
            }
        }
    }
    print("Done")
}
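One thing I now suspect is a coordinate-space mismatch; pure guesswork, but maybe hitTest wants viewport coordinates rather than SpriteKit node coordinates, in which case a conversion like this sketch would be needed first:

import CoreGraphics

// Guesswork: map a SpriteKit node-space touch location (origin at the
// node's anchor, y up) to viewport coordinates (origin top-left, y down).
func viewportPoint(fromNodeLocation p: CGPoint, viewportSize: CGSize) -> CGPoint {
    CGPoint(x: p.x + viewportSize.width / 2,
            y: viewportSize.height / 2 - p.y)
}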
Updated to GPT 2 with the app format and Whisky in October. I'm using Steam in GPT to play Windows-only games. The games consistently crash my computer into a restart when I play games like Liar's Bar and Phasmophobia, the latter even with its performance optimization update released today. Both of them do it when I load into the game room (the bar / ghost house), or even while loading. It ran Outpath properly, though, which is much less demanding than LB and Phas. I don't think it's a simple RAM config issue, because Phasmophobia ran fine for me with older versions in GPT 1; I did experience occasional crashes, but I think those only crashed the game, not my Mac. I'm not sure whether I'm doing something wrong or it's a GPT problem. I'm using an M1 MacBook Pro with 16 GB of memory on Sequoia 15.0.1.
Hi,
When using a High Definition Display, is there a way to render at exactly the target resolution on the physical screen?
My understanding is that the default behavior is to render to a backing store with a resolution (in pixels) that can be twice the size of the logical resolution (in points), and then let the OS handle down-scaling to the actual resolution of the screen. This is all fine for non-graphics-intensive apps, but it means that my game renders at a higher resolution than needed, which seems like an obvious loss of performance.
My expectation is that, for graphics-intensive applications such as games, we should be able to query and render at the final resolution of the display. Can it / should it be done?
Thank you for your help :)
FYI, I did find a document that explains how to set up your CAMetalLayer to render at a custom resolution. I suspect this may be what I have to do?
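If so, a sketch of what I have in mind (assuming the view's backing resolution is the size to target):

import AppKit
import QuartzCore

// Size the layer's drawable to the view's backing (pixel) resolution so
// the render target matches what reaches the display, instead of relying
// on a larger default backing store that gets down-scaled.
func matchDrawableToBacking(_ layer: CAMetalLayer, in view: NSView) {
    layer.contentsScale = view.window?.backingScaleFactor ?? 2.0
    layer.drawableSize = view.convertToBacking(view.bounds).size
}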
Hi there,
With a couple of other developers, we have been busy migrating our SpriteKit games and frameworks to Swift 6.
There is one issue we are unable to resolve, and this involves the interaction between SpriteKit and GameplayKit.
There is a very small demo repo created that clearly demonstrates the issue. It can be found here:
https://github.com/AchrafKassioui/GameplayKitExplorer/blob/main/GameplayKitExplorer/Basic.swift
The relevant code also pasted here:
import SwiftUI
import SpriteKit
import GameplayKit // added: GKEntity and GKComponent live in GameplayKit

struct BasicView: View {
    var body: some View {
        SpriteView(scene: BasicScene())
            .ignoresSafeArea()
    }
}

#Preview {
    BasicView()
}

class BasicScene: SKScene {
    override func didMove(to view: SKView) {
        size = view.bounds.size
        anchorPoint = CGPoint(x: 0.5, y: 0.5)
        backgroundColor = .gray
        view.isMultipleTouchEnabled = true

        let entity = BasicEntity(color: .systemYellow, size: CGSize(width: 100, height: 100))
        if let renderComponent = entity.component(ofType: BasicRenderComponent.self) {
            addChild(renderComponent.sprite)
        }
    }
}

@MainActor
class BasicEntity: GKEntity {
    init(color: SKColor, size: CGSize) {
        super.init()

        let renderComponent = BasicRenderComponent(color: color, size: size)
        addComponent(renderComponent)

        let animationComponent = BasicAnimationComponent()
        addComponent(animationComponent)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

@MainActor
class BasicRenderComponent: GKComponent {
    let sprite: SKSpriteNode

    init(color: SKColor, size: CGSize) {
        self.sprite = SKSpriteNode(texture: nil, color: color, size: size)
        super.init()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

class BasicAnimationComponent: GKComponent {
    let action1 = SKAction.scale(to: 1.3, duration: 0.07)
    let action2 = SKAction.scale(to: 1, duration: 0.15)

    override init() {
        super.init()
    }

    override func didAddToEntity() {
        if let renderComponent = entity?.component(ofType: BasicRenderComponent.self) {
            renderComponent.sprite.run(SKAction.repeatForever(SKAction.sequence([action1, action2])))
        }
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
As SKNode is designed to run on the MainActor, BasicRenderComponent is attributed with @MainActor as well. This is needed, as this GKComponent is dedicated to encapsulating the node that is rendered to the scene.
There is also a BasicAnimationComponent; this GKComponent is responsible for animating the rendered node.
Obviously, this is just an example, but when using GameplayKit in combination with SpriteKit it is very common for a GKComponent instance to manipulate an SKNode referenced from another GKComponent instance, often via open func update(deltaTime seconds: TimeInterval) or, as in this example, inside didAddToEntity.
Now, the problem is that in the above example (and the same goes for update(deltaTime seconds: TimeInterval)), the method didAddToEntity is not isolated to the MainActor, as GKComponent is not either.
This leads to the error Call to main actor-isolated instance method 'run' in a synchronous nonisolated context, as indeed the compiler cannot infer that didAddToEntity is isolated to the MainActor.
Marking BasicAnimationComponent as @MainActor does not help, as this isolation is not propagated to the methods inherited from the superclass.
In fact, we tried a plethora of other options, but none resolved this issue.
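One pattern we experimented with is MainActor.assumeIsolated (sketch below); it silences the error, but it relies on the undocumented guarantee that GameplayKit calls these overrides on the main thread, so we're not sure it's the right way forward:

// didAddToEntity re-entered onto the MainActor; compiles, but leans on the
// undocumented assumption that GameplayKit invokes this on the main thread
// (and traps at runtime if it ever doesn't).
override func didAddToEntity() {
    MainActor.assumeIsolated {
        if let renderComponent = entity?.component(ofType: BasicRenderComponent.self) {
            renderComponent.sprite.run(SKAction.repeatForever(SKAction.sequence([action1, action2])))
        }
    }
}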
How should we proceed with this? As of now, this is really holding us back migrating to Swift 6. Hope someone is able to help out here!
I've been running my SceneKit game for many weeks in Xcode without performance issues. The game itself is finished, so I thought I could go on with publishing it on the App Store, but when archiving it in Xcode and running the archived app, I noticed that it seriously hangs.
The hangs only seem to happen when I run the game in fullscreen mode. I tried disabling Game Mode, but the hangs still happen. Only when I run in windowed mode does the game run smoothly.
Instruments confirms that there are many serious hangs, but it also reports that CPU usage is quite low during those hangs, on average about 15%. From what I know, hangs happen when the main thread is busy, but how can that be when CPU usage is so low, and why does it only happen in fullscreen mode for release builds?
In the simplest case I can come up with, I create a scene (either fully or partially in code) with a single dynamic body, located slightly away from the origin.
I give the body a charge, as well as adding an electric field to the node. The body does nothing (as is to be expected, since it's the source of the field).
However, if I replace that field with a custom field (which does nothing except report back the passed-in position value), the position shown is the location of the body in the local space of its parent (in this case, the root node), rather than in the space of the node the field is attached to (i.e. itself).
I've attached the code, customising the SwiftUI app template. Hopefully someone can tell me what I'm doing wrong?
ContentView customisation…
import SwiftUI
import SceneKit

struct ContentView: View
{
    var body: some View
    {
        SceneView(scene: ElectricScene(), options: [.allowsCameraControl, .autoenablesDefaultLighting])
    }
}
And the code to create the scene…
import Foundation
import SceneKit

class ElectricScene: SCNScene
{
    override init()
    {
        super.init()

        physicsWorld.gravity = SCNVector3(0, 0, 0)

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 10)
        rootNode.addChildNode(cameraNode)

        let ballNode = SCNNode(geometry: SCNSphere(radius: 0.5))
        ballNode.position = SCNVector3(2, 0, 0)
        ballNode.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
        ballNode.physicsBody?.charge = -1
        rootNode.addChildNode(ballNode)

//        ballNode.physicsField = SCNPhysicsField.electric()
        ballNode.physicsField = SCNPhysicsField
            .customField { position, _, _, _, _ in
                print(position)
                return SCNVector3Zero
            }
    }

    @available(*, unavailable)
    required init?(coder: NSCoder)
    {
        fatalError("init(coder:) has not been implemented")
    }
}
This (repeatedly) prints out the following…
SCNVector3(x: 2.0, y: 0.0, z: 0.0)
…which is the position of the node relative to the root node, rather than relative to the source of the field (itself).
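The only workaround I've found is to convert the reported position inside the block; a sketch, assuming the callback really does pass parent-space coordinates:

// Convert the reported position into the field node's own space.
ballNode.physicsField = SCNPhysicsField
    .customField { [weak ballNode] position, _, _, _, _ in
        guard let ballNode else { return SCNVector3Zero }
        let local = ballNode.convertPosition(position, from: ballNode.parent)
        print(local)  // now relative to the field's source node
        return SCNVector3Zero
    }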
I am looking to implement CAMetalDisplayLink on a separate thread in a macOS application. I am basing my implementation on the following example project:
Achieving Smooth Frame Rates with Metal Display Link
This project allows you to configure whether a separate thread is used for rendering by setting RENDER_ON_MAIN_THREAD in GameConfig to 0. However, when I set it to use a separate thread, nothing is rendered. Stepping through the code shows that a separate thread is created, but a CAMetalDisplayLinkUpdate is never received. Does anyone know why this doesn't work?
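For what it's worth, my current theory is that the secondary thread needs a running run loop for updates to be delivered; a sketch of what I mean (displayLink stands for the already-configured CAMetalDisplayLink):

import Foundation
import QuartzCore

// Guess: add the link to the new thread's run loop and keep that run loop
// spinning; without this, no CAMetalDisplayLinkUpdate is ever delivered.
let renderThread = Thread {
    displayLink.add(to: .current, forMode: .default)
    RunLoop.current.run()  // keeps the thread alive to service the link
}
renderThread.name = "RenderThread"
renderThread.start()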