I have the higher-end M1 Mac Studio, and I have had a lot of success with Metal pipelines. However, when I tried to compile a compute pipeline that uses the bfloat type, the compiler seems to have no idea what that is.
Error: program_source:10:55: error: unknown type name 'bfloat'; did you mean 'float'?
Is there an OS update that is necessary for this support?
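For reference, a minimal kernel that reproduces the error. My assumption (not confirmed) is that bfloat is a Metal Shading Language 3.1 feature, so it needs the macOS 14 / Xcode 15 era toolchain; on older OS or toolchain versions the type simply doesn't exist:

```metal
#include <metal_stdlib>
using namespace metal;

// bfloat is only defined in MSL 3.1+ (assumption: macOS 14 / Xcode 15 toolchains).
kernel void scaleBF(device bfloat *data [[buffer(0)]],
                    uint id [[thread_position_in_grid]])
{
    data[id] = data[id] * bfloat(2.0f);
}
```

If the shader source is compiled at runtime, setting MTLCompileOptions.languageVersion to .version3_1 may also be required; that's my guess at what's going on here.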
I'm trying to find a way to reduce synchronization time between two compute shader calls, where one dispatch depends on an atomic counter from the other.
Example: I have two Metal kernels, select and execute. select looks through the numbers buffer and, using an atomic counter, stores the index of every number < 10 in a new buffer selectedNumberIndices. execute is then run counter times to do something with those selected indices.
kernel void select(device atomic_uint &counter [[buffer(0)]],
                   device uint *numbers [[buffer(1)]],
                   device uint *selectedNumberIndices [[buffer(2)]],
                   uint id [[thread_position_in_grid]]) {
    if (numbers[id] < 10) {
        uint idx = atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
        selectedNumberIndices[idx] = id;
    }
}

kernel void execute(device uint *selectedNumberIndices [[buffer(0)]],
                    uint id [[thread_position_in_grid]]) {
    // do something counter number of times
}
Currently I can do this by using .waitUntilCompleted() between the dispatches to ensure I get accurate results, something like:
// select
buffer = queue.makeCommandBuffer()!
encoder = buffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(selectState)
encoder.setBuffer(counterBuffer, offset: 0, index: 0)
encoder.setBuffer(numbersBuffer, offset: 0, index: 1)
encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 2)
encoder.dispatchThreads(.init(width: Int(numbersCount), height: 1, depth: 1),
                        threadsPerThreadgroup: .init(width: selectState.threadExecutionWidth, height: 1, depth: 1))
encoder.endEncoding()
buffer.commit()

// wait
buffer.waitUntilCompleted()

// execute
buffer = queue.makeCommandBuffer()!
encoder = buffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(executeState)
encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 0)
// extract the value of the atomic counter
let counterValue = counterBuffer.contents().load(as: UInt32.self)
encoder.dispatchThreads(.init(width: Int(counterValue), height: 1, depth: 1),
                        threadsPerThreadgroup: .init(width: executeState.threadExecutionWidth, height: 1, depth: 1))
encoder.endEncoding()
buffer.commit()
My question: is there any way to get the same functionality without the costly buffer.waitUntilCompleted() call? Or am I going about this completely the wrong way, or missing something else?
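One direction that avoids the CPU round trip, sketched under the assumption that a small extra kernel (not shown) converts the counter into a MTLDispatchThreadgroupsIndirectArguments value in indirectBuffer, is to encode everything into one serial compute encoder and size the second dispatch from an indirect buffer:

```swift
import Metal

// Sketch: both dispatches in one command buffer. The default compute encoder
// is serial, so `select`'s writes are visible to the later dispatches, and the
// GPU itself supplies the threadgroup counts for `execute`.
func encodeSelectThenExecute(queue: MTLCommandQueue,
                             selectState: MTLComputePipelineState,
                             executeState: MTLComputePipelineState,
                             counterBuffer: MTLBuffer,
                             numbersBuffer: MTLBuffer,
                             selectedBuffer: MTLBuffer,
                             indirectBuffer: MTLBuffer,   // holds 3 x UInt32 threadgroup counts
                             numbersCount: Int) {
    let buffer = queue.makeCommandBuffer()!
    let encoder = buffer.makeComputeCommandEncoder()!

    encoder.setComputePipelineState(selectState)
    encoder.setBuffer(counterBuffer, offset: 0, index: 0)
    encoder.setBuffer(numbersBuffer, offset: 0, index: 1)
    encoder.setBuffer(selectedBuffer, offset: 0, index: 2)
    encoder.dispatchThreads(MTLSize(width: numbersCount, height: 1, depth: 1),
                            threadsPerThreadgroup: MTLSize(width: selectState.threadExecutionWidth, height: 1, depth: 1))

    // ... here a tiny kernel would turn the atomic counter into threadgroup
    // counts and write them into indirectBuffer ...

    encoder.setComputePipelineState(executeState)
    encoder.setBuffer(selectedBuffer, offset: 0, index: 0)
    encoder.dispatchThreadgroups(indirectBuffer: indirectBuffer,
                                 indirectBufferOffset: 0,
                                 threadsPerThreadgroup: MTLSize(width: executeState.threadExecutionWidth, height: 1, depth: 1))

    encoder.endEncoding()
    buffer.commit()
}
```

Because the dispatch size now comes from indirectBuffer rather than from a CPU readback, the waitUntilCompleted() call disappears entirely; out-of-range threads in execute would need a bounds check against the counter.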
I create a 3D texture with size 32x32x64.
Then I write to this RWTexture with a compute shader; threadsPerThreadgroup is {32,32,32}.
The interesting part is that when I only write the first 32 slices and then sample the texture in a pixel shader, the sampling result is always 0.0.
Only when I fill all 64 slices does sampling return what I expect.
// Fill the first 32 slices
uint3 pixelIndex = threadIdWithinDispatch;
Vol[pixelIndex] = somedata;

// Fill the last 32 slices
pixelIndex.z += 32;
Vol[pixelIndex] = somedata;
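For reference, a sketch of how a single dispatch could cover the whole 32x32x64 volume (the group size here is illustrative; note that D3D11-class compute shaders cap a threadgroup at 1024 threads total, so a {32,32,32} group, 32768 threads, would be rejected):

```hlsl
RWTexture3D<float4> Vol : register(u0);

// One thread per texel; 8x8x8 = 512 threads per group.
// Dispatch(32/8, 32/8, 64/8) = Dispatch(4, 4, 8) covers the full volume.
[numthreads(8, 8, 8)]
void FillVolume(uint3 id : SV_DispatchThreadID)
{
    Vol[id] = float4(1, 0, 0, 1); // somedata
}
```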
I am working on a demo with RealityKit, but it seems to me that the default size/position of a RealityKit app's window is much smaller / further away / higher than normal apps.
I am using the RealityKit template as an example.
How can I adjust this behavior?
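For reference, my current guess (an assumption; defaultSize(width:height:depth:in:) is the SwiftUI API I found for this, and ContentView is the template's view) is that the template's volume size can be overridden on the WindowGroup:

```swift
import SwiftUI

@main
struct MyRealityApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
        // Ask for a 0.5 m cube instead of the template's default volume size.
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)
    }
}
```

As far as I can tell the window's position is placed by the system and can't be set from the app, so only the size part seems adjustable.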
So I can observe RealityKit components by using the new @Observable or ObservableObject, but both of these require my component to be a class instead of a struct.
I've read that making a component a class is a bad idea; is this correct?
Is there any other way to observe the values of an entity's components?
Hi everyone, I want to develop a game with a friend for fun. What is the best MacBook for game development, to run Unreal Engine or Unity?
Thx
I am attempting to install Steam to start running games, but I can't get past this step:
ditto /Volumes/Game\ Porting\ Toolkit1.0/lib/ `brew --prefix game-porting-toolkit`/lib/
Whenever I run the command it says:
ditto: Cannot get the real path for source '/Volumes/Game Porting Toolkit1.0/lib/'
May I please get some help?
I am trying to implement a feature to play video in a full immersive space, but I'm unable to achieve the desired result. When I run the app in a full immersive space, it shows the video in the center of the screen/window. See screenshot below.
Can you please guide me on how to play video in a full immersive space, just like the image below?
Regards,
Yasir Khan
My game crashes when it tries FetchItems. Please help me fix this.
Response appreciated.
Hello.
I'm working with Metal on Apple Vision Pro, and I assumed I could use mesh shaders to work with meshlets. But when creating the render pipeline, I get the following error message: "device does not support mesh shaders". The test is on the simulator, and my question is: will Apple Vision Pro support mesh shaders on physical devices?
Thanks.
When running spatial apps that use a custom Metal renderer, there's no Metal frame capture button in the debug bar. The "Capture GPU workload" button in the debug menu is also grayed out. There is no way to analyze Metal frames.
The dynamic library build for the Metal library fails when built from the downloaded copy: it uses the name of the download as the directory name, which contains spaces and so does not produce a syntactically valid file name. Renaming the folder in which the build files reside seems to resolve that problem. Then the header files for the dynamic library produce an access-denied error when compiling. Why are these demos released with such trivial problems? Surely someone tried to run it before release; either the demo should have been fixed or the short Readme should have been updated with instructions on how to set it up.
I have some strange behavior in my app. When I set the position to .zero, you can see the sphere normally. But when I change it to any other value, no matter how small, the sphere isn't visible in the view.
The RealityView
import SwiftUI
import RealityKit
import RealityKitContent
struct TheSphereOfDoomRV: View {
    @StateObject var viewModel: SphereViewModel = SphereViewModel()
    let sphere = SphereEntity(radius: 0.25, materials: [SimpleMaterial(color: .red, isMetallic: true)], name: "TheSphere")

    var body: some View {
        RealityView { content, attachments in
            content.add(sphere)
        } update: { content, attachments in
            sphere.scale = SIMD3<Float>(x: viewModel.scale, y: viewModel.scale, z: viewModel.scale)
        } attachments: {
            VStack {
                Text("The Sphere of Doom is one of the most powerful Objects. You can interact with him in every way you can imagine ").multilineTextAlignment(.center)
                Button {
                } label: {
                    Text("Play Video!")
                }
            }.tag("description")
        }.modifier(GestureModifier()).environmentObject(viewModel)
    }
}
SphereEntity:
import Foundation
import RealityKit
import RealityKitContent
class SphereEntity: Entity {
    private let sphere: ModelEntity

    @MainActor
    required init() {
        sphere = ModelEntity()
        super.init()
    }

    init(radius: Float, materials: [Material], name: String) {
        sphere = ModelEntity(mesh: .generateSphere(radius: radius), materials: materials)
        sphere.generateCollisionShapes(recursive: false)
        sphere.components.set(InputTargetComponent())
        sphere.components.set(HoverEffectComponent())
        sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: radius)]))
        sphere.name = name
        super.init()
        self.addChild(sphere)
        self.position = .zero // .init(x: Float, y: Float, z: Float) and [Float, Float, Float] doesn't work ...
    }
}
So if I drag an entity in RealityView I have to disable the PhysicsBodyComponent to make sure nothing fights dragging the entity around. This makes sense.
When I finish a drag, this closure gets executed:
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { e in
            // ...
        }
        .onEnded { e in
            let velocity: CGSize = e.gestureValue.velocity
        }
)
If I now re-add the PhysicsBodyComponent to the entity I just dragged and make it mode: .dynamic, it will lose all velocity and drop straight down under gravity.
Instead, the solution is to apply mode: .kinematic and also add a PhysicsMotionComponent to the entity. This should retain velocity after letting go of the object.
However, I need to instantiate it with PhysicsMotionComponent(linearVelocity: SIMD3<Float>, angularVelocity: SIMD3<Float>).
How can I calculate the linearVelocity and angularVelocity when the e.gestureValue.velocity I get is just a CGSize?
Is there another prop of gestureValue I should be looking at?
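One workaround I'm considering (a sketch; the helper type is my own, not an API) is to ignore the 2D CGSize entirely and estimate a 3D velocity myself by sampling the entity's position in onChanged:

```swift
// Sketch: estimate a 3D linear velocity from successive entity positions
// sampled in the drag's onChanged handler. Positions are in scene units
// (meters); time is in seconds.
struct VelocityEstimator {
    private var lastPosition: SIMD3<Float>?
    private var lastTime: Double?
    private(set) var velocity: SIMD3<Float> = .zero

    mutating func addSample(position: SIMD3<Float>, time: Double) {
        if let p = lastPosition, let t = lastTime, time > t {
            // Velocity = displacement over elapsed time since the last sample.
            velocity = (position - p) / Float(time - t)
        }
        lastPosition = position
        lastTime = time
    }
}
```

In onChanged I'd feed it e.entity.position(relativeTo: nil) plus a timestamp, and in onEnded hand estimator.velocity to PhysicsMotionComponent(linearVelocity:angularVelocity:); angular velocity could presumably be estimated the same way from orientation deltas.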
What should an app do with an instance of GKGameCenterViewController when the app transitions to the background?
Currently, my app just leaves it in place, displayed on top in full screen. Most of the time when my app resumes to the foreground, the GKGameCenterViewController is still displayed and is functional. However, sometimes when the app resumes, the GKGameCenterViewController's view has vanished and, additionally, my app doesn't receive a GK authentication event, so effectively it is "hung". This seems to happen most often when the app has been in the background a while, such as overnight. The app is still in memory, however, not starting cold.
I would like to leave the GKGameCenterViewController/view in place when the app is backgrounded since the player may return to the game quickly and be right back where they left off. And most of the time that works. However, I need to solve the problem for the times it doesn't as I described above.
Is there any guidance on what to do with a GKGameCenterViewController (or any GK controller for that matter) when an app goes into the background?
Hi,
I implemented it as shown in the link below, but it does not animate.
https://developer.apple.com/videos/play/wwdc2023/10080/?time=1220
The following message was displayed:
No bind target found for played animation.
import SwiftUI
import RealityKit
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            if let entity = try? await ModelEntity(named: "toy_biplane_idle") {
                let bounds = entity.model!.mesh.bounds.extents
                entity.components.set(CollisionComponent(shapes: [.generateBox(size: bounds)]))
                entity.components.set(HoverEffectComponent())
                entity.components.set(InputTargetComponent())

                if let toy = try? await ModelEntity(named: "toy_drummer_idle") {
                    let orbit = OrbitAnimation(
                        name: "orbit",
                        duration: 30,
                        axis: [0, 1, 0],
                        startTransform: toy.transform,
                        bindTarget: .transform,
                        repeatMode: .repeat)
                    if let animation = try? AnimationResource.generate(with: orbit) {
                        toy.playAnimation(animation)
                    }
                    content.add(toy)
                }
                content.add(entity)
            }
        }
    }
}
We're getting some strange rendering crashes on various devices running both iOS 16 and the iOS 17 beta.
The problems all appear when compiling with any of the Xcode 15 betas, including beta 8. This code has worked fine for years.
The clearest error we get is on the iPhone X and XR where newRenderPipelineStateWithDescriptor returns:
"Inlining all functions due to use of indirect argument bufferbuffer(15): Unable to map argument buffer access to resource"
Buffer 15 is where we stash our textures and looks like this:
typedef struct RegularTextures {
    // A bit per texture that's present
    uint32_t texPresent [[ id(WKSTexBufTexPresent) ]];
    // Texture indirection (for accessing sub-textures)
    const metal::array<float, 2*WKSTextureMax> offset [[ id(WKSTexBuffIndirectOffset) ]];
    const metal::array<float, 2*WKSTextureMax> scale [[ id(WKSTexBuffIndirectScale) ]];
    const metal::array<metal::texture2d<float, metal::access::sample>, WKSTextureMax> tex [[ id(WKSTexBuffTextures) ]];
} RegularTextures;
The program we're trying to set up looks like this:
vertex ProjVertexTriB vertexTri_multiTex(
    VertexTriB vert [[stage_in]],
    constant Uniforms &uniforms [[ buffer(WKSVertUniformArgBuffer) ]],
    constant Lighting &lighting [[ buffer(WKSVertLightingArgBuffer) ]],
    constant VertexTriArgBufferB &vertArgs [[ buffer(WKSVertexArgBuffer) ]],
    constant RegularTextures &texArgs [[ buffer(WKSVertTextureArgBuffer) ]])
{
    // Do things
}
Fairly benign as these things go.
Even more curiously, a different program with the same RegularTextures argument buffer is sometimes set up first without complaint.
I strongly suspect Apple introduced a bug here, but with the impending release, we're just trying to figure out how to work around it.
So there's a "grounding shadow" component we can add to any entity in Reality Composer Pro.
My use case is Apple Vision Pro + RealityKit.
I'm wondering: by default, I think I'd want to add this to every entity that's a ModelEntity in my scene... right?
Should we just add this component once to the root transform?
Or should we add it to any entity individually if it's a model entity?
Or should we not add this at all? Will RealityKit do it for us?
Or does it also depend if we use a volume or a full space?
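For context, here is roughly what I mean by adding it per entity (a sketch; GroundingShadowComponent is the RealityKit type behind that Reality Composer Pro checkbox, and the traversal is just illustrative):

```swift
import RealityKit

// Recursively add a grounding shadow to every ModelEntity under `root`.
func addGroundingShadows(to root: Entity) {
    if root is ModelEntity {
        root.components.set(GroundingShadowComponent(castsShadow: true))
    }
    for child in root.children {
        addGroundingShadows(to: child)
    }
}
```

The alternative I'm asking about would be a single components.set on the root transform instead of this walk.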
InstallAware Software has published an open source GUI to automate all of the command line chores involved in getting Apple's Game Porting Toolkit up and running:
https://github.com/installaware/AGPT
Features:
Uninstalls Homebrew for Apple Silicon if it is already present (as mandated by the Apple Game Porting Toolkit)
Installs Homebrew x86_64
Installs Wine
Configures Wine settings for Windows
(optional) Copies over Apple Game Porting Toolkit binaries to accelerate DirectX 12 games running on Apple Silicon when the DMG download from Apple Developer is locally available
(optional) Installs the manually supplied version of Xcode Command Line Tools when the DMG download from Apple Developer is locally available (Xcode Command Line Tools are automatically set up by Homebrew unless you're running macOS Sonoma)
Installs arbitrary Windows software, including games
Runs previously installed Windows software, including Wine-supplied default tools (File Manager, Registry Editor, "reboot" tool, App Uninstaller, Task Manager, Notepad, Wordpad, Internet Browser); supports passing custom command-line parameters to launched apps
Does not require macOS Sonoma or Apple Silicon
Supports Apple Intel hardware and earlier macOS versions through standard Wine functionality (running 3D games at high performance using your non-Apple Silicon embedded/dedicated GPUs)
In addition to locally downloading and building the sources yourself using the repository above (and creating your own forks), you may also download a pre-built DMG, notarized by Apple and thus safe and free of security limitations, providing you with a single-click experience to run any Windows software on your Mac today:
https://www.installaware.com/iamp/agpt.dmg