Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Post · Replies · Boosts · Views · Activity

Is the MPSDynamicScene example correctly computing the motion vector texture?
I'm trying to implement de-noising of AO in my app, using the MPSDynamicScene example as a guide: https://developer.apple.com/documentation/metalperformanceshaders/animating_and_denoising_a_raytraced_scene

In that example, the motion vectors are computed in UV coordinates, resulting in very small values:

```metal
// Compute motion vectors
if (uniforms.frameIndex > 0) {
    // Map current pixel location to 0..1
    float2 uv = in.position.xy / float2(uniforms.width, uniforms.height);

    // Unproject the position from the previous frame then transform it from
    // NDC space to 0..1
    float2 prevUV = in.prevPosition.xy / in.prevPosition.w * float2(0.5f, -0.5f) + 0.5f;

    // Next, remove the jittering which was applied for antialiasing from both
    // sets of coordinates
    uv -= uniforms.jitter;
    prevUV -= prevUniforms.jitter;

    // Then the motion vector is simply the difference between the two
    motionVector = uv - prevUV;
}
```

Yet the documentation for MPSSVGF seems to indicate the offsets should be expressed in texels:

"The motion vector texture must be at least a two channel texture representing how many texels each texel in the source image(s) have moved since the previous frame. The remaining channels will be ignored if present. This texture may be nil, in which case the motion vector is assumed to be zero, which is suitable for static images."

Is this a mistake in the example code? I'm asking because doing something similar in my own app leaves AO trails, which would indicate the motion vector values are too small in magnitude. I don't really see trails in the example, even when I speed up the animation, but that could be because the example is monochrome.

Update: if I multiply the UV offsets by the size of the texture, I get a bad result, which suggests the header comment is misleading and the values are in fact expected in UV coordinates. So perhaps the trails I'm seeing in my app have some other cause. I also wonder who else is actually using this API; I would think most game engines are doing their own thing. Perhaps some of Apple's own code uses it.
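For reference, here is a minimal Swift sketch of the consuming side, based on how the MPSDynamicScene sample feeds the denoiser through the MPSSVGFDenoiser convenience class. The signatures are recalled from the sample and the MPS headers, so treat them as assumptions to verify; the point is that the motion vector texture written by the shader above is handed to the denoiser as-is.

```swift
import Metal
import MetalPerformanceShaders

// Sketch: denoise a single-channel AO image with MPSSVGFDenoiser. In a real app
// the SVGF object, allocator and denoiser would be created once, not per frame.
func denoiseAO(device: MTLDevice,
               commandBuffer: MTLCommandBuffer,
               noisyAO: MTLTexture,
               motionVectors: MTLTexture,
               depthNormals: MTLTexture,
               previousDepthNormals: MTLTexture) -> MTLTexture {
    let allocator = MPSSVGFDefaultTextureAllocator(device: device)
    let svgf = MPSSVGF(device: device)
    svgf.channelCount = 1   // AO is a single-channel signal (assumed property name)
    let denoiser = MPSSVGFDenoiser(SVGF: svgf, textureAllocator: allocator)

    // Whether motionVectors should contain UV-space or texel-space offsets is
    // exactly the question; the sample passes its UV-space texture straight through.
    return denoiser.encode(commandBuffer: commandBuffer,
                           sourceTexture: noisyAO,
                           motionVectorTexture: motionVectors,
                           depthNormalTexture: depthNormals,
                           previousDepthNormalTexture: previousDepthNormals)
}
```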
0
0
572
Aug ’23
Maximize memory read bandwidth on M1 Ultra/M2 Ultra
I am in the process of developing a matrix-vector multiplication kernel. While conducting performance evaluations, I've noticed that on M1/M1 Pro/M1 Max, the kernel demonstrates an impressive memory bandwidth utilization of around 90%. However, when executed on the M1 Ultra/M2 Ultra, this figure drops to approximately 65%. My suspicion is that this discrepancy is attributed to the dual-die architecture of the M1 Ultra/M2 Ultra. It's plausible that the necessary data might be stored within the L2 cache of the alternate die. Could you kindly provide any insights or recommendations for mitigating the occurrence of on-die L2 cache misses on the Ultra chips? Additionally, I would greatly appreciate any general advice aimed at enhancing memory load speeds on these particular chips.
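As a side note on the performance evaluations, here is a minimal sketch of estimating achieved bandwidth per command buffer from the GPU timestamps Metal exposes. bytesMoved is whatever the kernel actually reads and writes, which has to be computed from the matrix and vector sizes.

```swift
import Metal

// Rough achieved-bandwidth estimate for one command buffer. gpuStartTime and
// gpuEndTime are reported in seconds once the command buffer has completed.
func reportBandwidth(commandBuffer: MTLCommandBuffer, bytesMoved: Double) {
    commandBuffer.addCompletedHandler { cb in
        let seconds = cb.gpuEndTime - cb.gpuStartTime
        guard seconds > 0 else { return }
        let gbPerSecond = bytesMoved / seconds / 1e9
        print(String(format: "Achieved ~%.1f GB/s", gbPerSecond))
    }
}
```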
0
0
648
Aug ’23
Where can I find software developers for Vision Pro software?
My name is Leuy, a sophomore at the Wharton School of Business, with a passion for entrepreneurship and a strong belief in the potential of VR and AR technologies to reshape our family interactions. I'm currently working on a groundbreaking startup that aims to create a family-oriented co-working and co-learning platform. The essence of my vision is to help busy working parents spend quality time with their kids using virtual reality (VR) and augmented reality (AR) on Apple's Vision Pro. Do you know where I can find the best software developers to help bring my vision to life? Thanks.
3
1
956
Aug ’23
How to get external display information after choosing "Use As Separate Display" via Screen Mirroring on macOS?
I would like to get information about the connected display, such as the vendor number, EISA ID, and so on, after connecting an external display via "Screen Mirroring" -> "Use As Separate Display". When the same display is connected through the HDMI port versus extend mode in Screen Mirroring, the information is not identical:

HDMI:
Other display found - ID: 19241XXXX, Name: YYYY (Vendor: 19ZZZ, Model: 57WWW)

Screen Mirroring, extend mode:
Other display found - ID: 41288XX, Name: AAA (Vendor: 163ZYYBBB, Model: 16ZZWWYYY)

I tried to get the display information with the method below.

```swift
func configureDisplays() {
    var onlineDisplayIDs = [CGDirectDisplayID](repeating: 0, count: 16)
    var displayCount: UInt32 = 0
    guard CGGetOnlineDisplayList(16, &onlineDisplayIDs, &displayCount) == .success else {
        os_log("Unable to get display list.", type: .info)
        return
    }
    for onlineDisplayID in onlineDisplayIDs where onlineDisplayID != 0 {
        let name = DisplayManager.getDisplayNameByID(displayID: onlineDisplayID)
        let id = onlineDisplayID
        let vendorNumber = CGDisplayVendorNumber(onlineDisplayID)
        let modelNumber = CGDisplayModelNumber(onlineDisplayID)
        let serialNumber = CGDisplaySerialNumber(onlineDisplayID)
        if !DEBUG_SW, DisplayManager.isAppleDisplay(displayID: onlineDisplayID) {
            let appleDisplay = AppleDisplay(id, name: name, vendorNumber: vendorNumber, modelNumber: modelNumber, serialNumber: serialNumber, isVirtual: isVirtual, isDummy: isDummy)
            os_log("Apple display found - %{public}@", type: .info, "ID: \(appleDisplay.identifier), Name: \(appleDisplay.name) (Vendor: \(appleDisplay.vendorNumber ?? 0), Model: \(appleDisplay.modelNumber ?? 0))")
        } else {
            let otherDisplay = OtherDisplay(id, name: name, vendorNumber: vendorNumber, modelNumber: modelNumber, serialNumber: serialNumber, isVirtual: isVirtual, isDummy: isDummy)
            os_log("Other display found - %{public}@", type: .info, "ID: \(otherDisplay.identifier), Name: \(otherDisplay.name) (Vendor: \(otherDisplay.vendorNumber ?? 0), Model: \(otherDisplay.modelNumber ?? 0))")
        }
    }
}
```

Is it possible to get the same display information when the display is connected via the HDMI port and via extend mode in Screen Mirroring?
0
0
488
Aug ’23
SKTexture renders SF Symbols image always black
Hi, I'm creating an SF Symbols image like this:

```swift
var img = UIImage(systemName: "x.circle", withConfiguration: symbolConfig)!.withTintColor(.red)
```

In the debugger the image really is red, and I'm using this image to create an SKTexture:

```swift
let shuffleTexture = SKTexture(image: img)
```

The texture image is ALWAYS black and I have no idea how to change its color. Nothing I've tried so far works. Any ideas how to solve this? Thank you! Best Regards, Frank
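One workaround worth trying (a sketch, not a guaranteed fix: it assumes the tint is being dropped because the symbol is a template image whose color is normally applied by UIKit at draw time) is to rasterize the tinted symbol into a plain bitmap before handing it to SpriteKit:

```swift
import SpriteKit
import UIKit

// Render the tinted SF Symbol into an ordinary bitmap so the color is baked in,
// then build the SKTexture from that bitmap instead of the template image.
func makeSymbolTexture(systemName: String,
                       tint: UIColor,
                       configuration: UIImage.SymbolConfiguration) -> SKTexture? {
    guard let symbol = UIImage(systemName: systemName, withConfiguration: configuration)?
        .withTintColor(tint, renderingMode: .alwaysOriginal) else { return nil }
    let renderer = UIGraphicsImageRenderer(size: symbol.size)
    let flattened = renderer.image { _ in
        symbol.draw(in: CGRect(origin: .zero, size: symbol.size))
    }
    return SKTexture(image: flattened)
}
```

Usage would then be something like `let shuffleTexture = makeSymbolTexture(systemName: "x.circle", tint: .red, configuration: symbolConfig)`.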
1
0
714
Aug ’23
Unknown AMD Radeon driver version
When I try to launch Star Wars Battlefront II (2017) I get an error that my AMD Radeon driver version is unknown. I found a solution for DXVK: add

```
dxgi.customDeviceId = 10de
dxgi.customVendorId = 1c06
```

to dxvk.conf. How can I fix that using the Game Porting Toolkit? Screenshot - https://drive.google.com/file/d/1ueSDesizkJDcc6XufyB295k4JqhEL-bb/view?usp=share_link
0
0
595
Aug ’23
MetalFX sample code does not support TemporalScaler
I'm using a Mac mini with macOS Ventura 13.3.1. When I run the MetalFX sample code on it and choose the temporal scaler, makeTemporalScaler returns nil and prints "The temporal scaler effect is not usable!". If I choose the spatial scaler, it works fine.

```swift
guard let temporalScaler = desc.makeTemporalScaler(device: device) else {
    print("The temporal scaler effect is not usable!")
    mfxScalingMode = .defaultScaling
    return
}
```

Sample code: https://developer.apple.com/documentation/metalfx/applying_temporal_antialiasing_and_upscaling_using_metalfx?language=objc
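One thing worth ruling out first (a sketch; the temporal scaler may have tighter device requirements than the spatial one, in which case a nil result on some Macs is expected rather than a bug): query MetalFX support for the device up front.

```swift
import Metal
import MetalFX

// Quick capability probe: if temporal support is false for this GPU, then
// makeTemporalScaler(device:) returning nil is the documented behavior.
func checkMetalFXSupport(device: MTLDevice) {
    let spatialOK = MTLFXSpatialScalerDescriptor.supportsDevice(device)
    let temporalOK = MTLFXTemporalScalerDescriptor.supportsDevice(device)
    print("Spatial scaler supported: \(spatialOK), temporal scaler supported: \(temporalOK)")
}
```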
1
0
554
Aug ’23
Reset Game Center scores
Is it possible to remove/reset Game Center scores for my game app? I'd like to remove all scores so new users can start to compete. Is this possible? The games are educationally based and I'd like to reset the scores for a new class.
0
0
435
Aug ’23
Understanding MaterialX, USDShaders and Material workflows from Blender and other tools
Hi, I've been exploring a project with visionOS, and have been quite confused about the capabilities and workflows for using custom materials in RealityKit and Reality Composer Pro (RCP) for visionOS. Ideally I would be able to create / load / modify a model and its materials in Blender, export to OpenUSD and have it load fully in RCP, but this hasn't been the case. Instead, different aspects of the material don't seem to be exported correctly, and that has led me to investigate MaterialX, OpenUSD and Metal, and how they work in visionOS, RealityKit and Reality Composer Pro.

MaterialX was announced as a primary format for working with 3D materials, but the .mtlx file format doesn't appear to be usable in RCP directly - specifically when trying materials provided in the AMD OpenGPU MaterialX Library. (Note: AFAIK, Blender does not currently support MaterialX.) Downloading a material provides a folder with the textures and the corresponding .mtlx, but currently in RCP (Xcode 15 beta 6) this file is ignored. Similarly, trying to load it using ShaderGraphMaterial fails with 'Error in prim' and no other details that I can see. It also appears that there is a way of bundling MaterialX files within an OpenUSD file (especially implied by the error about prims), but I haven't been able to work out how this can be done, or whether this is the correct approach.

Unpacking the Apple-provided materials in RCP from usdz to usda, these appear to define the shaders in OpenUSD and reference the RCP MaterialX Preview Shader (presumably created using the Shader Graph). There is also reference from the official MaterialX.org and OpenUSD sites to using a USD / MaterialX plugin to enable compatibility.

I've also tried, and followed along with, the introductory tutorial on the built-in Shader Graph. I find it difficult to understand and quite different from Blender's shader nodes, but it currently appears to be the primary promoted way to create and work with materials.

Finally, I had expected that CustomMaterials using Metal shaders would be available, as Metal was mentioned for Fully Immersive Spaces, and 'Explore Advanced Rendering with RealityKit 2' from WWDC21 covers custom shaders, but CustomMaterial is not listed as included in visionOS and, according to an answer here, it's not currently planned (although the documentation still mentions Metal briefly).

Overall, what are the suggestions for workflows with materials for RealityKit on visionOS?

- Is there a fully compatible path from Blender -> OpenUSD -> Reality Composer Pro?
- Do I need to export materials and models from Blender individually and rebuild them in RCP using the Shader Graph?
- Can I utilise existing MaterialX materials in Reality Composer Pro, and if so, how?
- Are there any other good resources for getting comfortable with and understanding the nodes within the Shader Graph? Which WWDC talks would be good to review on this?

Really appreciate any guidance!
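For the ShaderGraphMaterial route specifically, here is a minimal sketch of loading a material authored with RCP's Shader Graph from the generated RealityKitContent package (the material path "/Root/MyMaterial", the file name "Scene.usda" and the bundle name are placeholders for whatever the RCP project contains; the initializer shown is the async RealityKit one, and it will not load a bare .mtlx file):

```swift
import RealityKit
import RealityKitContent   // the Swift package Xcode generates for an RCP project (assumed name)

// Sketch: load a Shader Graph material from the RCP scene and apply it to a model.
func applyShaderGraphMaterial(to model: ModelEntity) async {
    do {
        let material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                                     from: "Scene.usda",
                                                     in: realityKitContentBundle)
        model.model?.materials = [material]
    } catch {
        // 'Error in prim'-style failures surface here.
        print("Failed to load ShaderGraphMaterial: \(error)")
    }
}
```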
3
1
2.4k
Aug ’23
Wrong platform and OS detection from D3DMetal beta3
Using GPTK beta 3, when launching Steam from a Sonoma beta 5 VM (launched from the latest UTM, 4.3.5) it says:

D3DM: D3DMetal requires Apple silicon and macOS 14.0 Sonoma or higher

Command used to launch Steam:

```
gameportingtoolkit ~/my-game-prefix 'C:\Program Files (x86)\Steam\steam.exe'
```

GPTK was compiled/installed fine using x86 Homebrew and the Xcode 15 beta 6 command line tools. gameportingtoolkit has been copied to /usr/local/bin so the GPTK image could be unmounted. This is on an M2 Pro Mac mini (12 CPU cores / 19 GPU cores, 32 GB RAM), with 8 performance cores and 20 GB of RAM allocated to the VM.
3
0
1.6k
Aug ’23
RealityView attachments do not show up in Vision Pro simulator
I have added an attachments closure to a RealityView as outlined in the WWDC session "Enhance your spatial computing app with RealityKit", but it's not showing up - neither in the Xcode preview window nor in the Vision Pro simulator. I used the example code 1:1; however, I had to load the entity asynchronously with "try? await" to satisfy the compiler. Any help is appreciated, thanks in advance!
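Worth noting that the attachments API changed across the visionOS 1.0 betas (earlier seeds, if I recall correctly, used tagged views, while later seeds use an Attachment(id:) builder), so session code may not match the SDK in use. A minimal sketch against the later API shape, with placeholder content, looks roughly like this:

```swift
import SwiftUI
import RealityKit

struct AttachmentDemo: View {
    var body: some View {
        RealityView { content, attachments in
            // Some anchor content so the attachment has something to sit next to.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
            content.add(sphere)

            // Pull the SwiftUI attachment in as an entity and place it in the scene.
            if let label = attachments.entity(for: "label") {
                label.position = [0, 0.15, 0]
                content.add(label)
            }
        } attachments: {
            // The id here must match the one passed to attachments.entity(for:).
            Attachment(id: "label") {
                Text("Hello from an attachment")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}
```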
5
0
1.6k
Aug ’23
Why is my neural network slower on MPS (Apple silicon) than on the CPU?
On my MacBook (M2), the code works correctly on both CPU and GPU, but it runs much slower on the GPU! I have moved my data and my model to the GPU, and that seemed to work. (Screenshot: iShot_2023-08-20_09.57.41.png.) I measured my code's runtime: when the following train function is called, the loop inside it runs extraordinarily slowly.

```python
def train(net, device, train_features, train_labels, test_features, test_labels,
          num_epochs, learning_rate, weight_decay, batch_size):
    train_ls, test_ls = [], []
    train_iter = d2l.load_array((train_features, train_labels), batch_size, device)
    # Adam
    optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate,
                                 weight_decay=weight_decay)
    for epoch in range(num_epochs):
        for X, y in train_iter:
            optimizer.zero_grad()
            l = loss(net(X), y)
            l.backward()
            optimizer.step()
        # train_ls.append(log_rmse(net, train_features, train_labels))
    return train_ls, test_ls
```
0
0
686
Aug ’23
RealityView is not responding to tap gesture
Hello, I have created a view with a full-view 360° image, and I need to perform a task when the user taps anywhere on the screen (to leave the dome), but no matter what I try it just does not work - it doesn't print anything at all.

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct StreetWalk: View {
    @Binding var threeSixtyImage: String
    @Binding var isExitFaded: Bool

    var body: some View {
        RealityView { content in
            // Create a material with a 360 image
            guard let url = Bundle.main.url(forResource: threeSixtyImage, withExtension: "jpeg"),
                  let resource = try? await TextureResource(contentsOf: url) else {
                // If the asset isn't available, something is wrong with the app.
                fatalError("Unable to load starfield texture.")
            }
            var material = UnlitMaterial()
            material.color = .init(texture: .init(resource))

            // Attach the material to a large sphere.
            let streetDome = Entity()
            streetDome.name = "streetDome"
            streetDome.components.set(ModelComponent(
                mesh: .generatePlane(width: 1000, depth: 1000),
                materials: [material]
            ))

            // Ensure the texture image points inward at the viewer.
            streetDome.scale *= .init(x: -1, y: 1, z: 1)
            content.add(streetDome)
        } update: { updatedContent in
            // Create a material with a 360 image
            guard let url = Bundle.main.url(forResource: threeSixtyImage, withExtension: "jpeg"),
                  let resource = try? TextureResource.load(contentsOf: url) else {
                // If the asset isn't available, something is wrong with the app.
                fatalError("Unable to load starfield texture.")
            }
            var material = UnlitMaterial()
            material.color = .init(texture: .init(resource))
            updatedContent.entities.first?.components.set(ModelComponent(
                mesh: .generateSphere(radius: 1000),
                materials: [material]
            ))
        }
        .gesture(tap)
    }

    var tap: some Gesture {
        SpatialTapGesture().targetedToAnyEntity().onChanged { value in
            // Access the tapped entity here.
            print(value.entity)
            print("maybe you can tap the dome")
            // isExitFaded.toggle()
        }
    }
}
```
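One thing worth checking (a sketch under the assumption that this is the cause): targeted spatial gestures only hit entities that have both an InputTargetComponent and a CollisionComponent, and the dome above never gets either. Inside the make closure, right after content.add(streetDome), something like this should make the dome hit-testable:

```swift
// Give the dome a collision shape and opt it into spatial input so that
// SpatialTapGesture().targetedToAnyEntity() can actually hit it.
streetDome.components.set(InputTargetComponent())
streetDome.components.set(CollisionComponent(
    shapes: [.generateSphere(radius: 1000)],
    isStatic: true
))
```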
1
0
1.4k
Aug ’23
Which attribute should I use to know which eye is being rendered in the fragment shader?
Hi, I have an app based on Metal that runs on visionOS. It has a huge sphere mesh and renders video output (from AVPlayer) onto it. What I want to do is render the left portion of my video output to the left eye and the right portion to the right eye. In my fragment shader, I think I need to know whether the shader thread is for the left eye or the right eye. (I'm not using MV-HEVC encoded video, just HEVC.) What I currently do is assume that 'amplification_id' is the thing that determines which eye is being rendered, but I'm not sure that is correct.
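For context, here is a host-side sketch of the vertex amplification setup that [[amplification_id]] pairs with (the view mapping values are illustrative; with Compositor Services they would normally come from the drawable's views). As far as I know, amplification_id is a vertex-stage input, so the usual pattern is to forward the eye index from the vertex function to the fragment function as a flat-interpolated value - treat that as an assumption to verify against the Metal Shading Language specification.

```swift
import Metal

// Sketch: render both eyes in one pass with vertex amplification. Each amplified
// vertex invocation sees a different [[amplification_id]] (0 or 1), and these
// mappings route each one to its own viewport / render target slice.
func encodeStereoDraw(encoder: MTLRenderCommandEncoder) {
    let viewMappings = (0..<2).map { eye in
        MTLVertexAmplificationViewMapping(
            viewportArrayIndexOffset: UInt32(eye),
            renderTargetArrayIndexOffset: UInt32(eye))
    }
    encoder.setVertexAmplificationCount(2, viewMappings: viewMappings)
    // ... set the pipeline state, bind buffers/textures, and issue draw calls as usual ...
}
```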
2
0
664
Aug ’23
Video Passthrough with Compositor Services and Metal on visionOS
I just created my first Compositor Services/Metal project for visionOS. I was surprised when I ran it in the simulator that the room wasn't visible. Looking through the Compositor Services API, there doesn't seem to be a way to enable passthrough video. If that's true, it means there's no way to create a mixed immersive space using Compositor Services and Metal. And if that's true, it would also apply to game engines like Unity that use those APIs to support immersive spaces. (I'm aware that Unity also has a feature that allows it to render using RealityKit, but I'm referring to full-screen apps using features like custom shaders.) Does this mean that apps created with Compositor Services and Metal are VR-only? If so, is that the way things are going to be for 1.0? And if so, are there any plans to allow compositing with the passthrough video in a future release? I hope I'm overlooking something obvious. Thanks in advance.
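For what it's worth, a minimal sketch of how such a space is declared today; the open question above is whether anything other than the fully immersive style can be requested for a CompositorLayer-based space.

```swift
import Foundation
import SwiftUI
import CompositorServices

@main
struct MetalImmersiveApp: App {
    // The immersion style is what controls full vs. mixed presentation at the scene level.
    @State private var immersionStyle: ImmersionStyle = .full

    var body: some Scene {
        ImmersiveSpace(id: "MetalSpace") {
            CompositorLayer { layerRenderer in
                // Compositor Services hands you a LayerRenderer; you drive frame
                // submission from your own thread.
                let renderThread = Thread {
                    // renderLoop(layerRenderer)   // hypothetical Metal frame loop goes here
                }
                renderThread.name = "Render Thread"
                renderThread.start()
            }
        }
        .immersionStyle(selection: $immersionStyle, in: .full)
    }
}
```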
1
1
784
Aug ’23