Render advanced 3D graphics and perform data-parallel computations on graphics processors using Metal.

Posts under Metal tag

140 Posts

Post

Replies

Boosts

Views

Activity

tensorflow-metal fails with tensorflow > 2.18.1
Also submitted as feedback (ID: FB20612561). Tensorflow-metal fails on tensorflow versions above 2.18.1, but works fine on tensorflow 2.18.1. In a new python 3.12 virtual environment: pip install tensorflow pip install tensorflow-metal python -c "import tensorflow as tf" Prints error: Traceback (most recent call last): File "<string>", line 1, in <module> File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in _ll.load_library(_plugin_dir) File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library py_tf.TF_LoadLibrary(lib) tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib Reason: tried: '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file)
6
4
2.3k
Nov ’25
Deterministic RNG behaviour across Mac M1 CPU and Metal GPU – BigCrush pass & structural diagnostics
Hello, I am currently working on a research project under ENINCA Consulting, focused on advanced diagnostic tools for pseudorandom number generators (structural metrics, multi-seed stability, cross-architecture reproducibility, and complementary indicators to TestU01). To validate this diagnostic framework, I prototyped a small non-linear 64-bit PRNG (not as a goal in itself, but simply as a vehicle to test the methodology). During these evaluations, I observed something interesting on Apple Silicon (Mac M1): • bit-exact reproducibility between M1 ARM CPU and M1 Metal GPU, • full BigCrush pass on both CPU and Metal backends, • excellent p-values, • stable behaviour across multiple seeds and runs. This was not the intended objective; the goal was mainly to validate the diagnostic concepts, but these results raised some questions about deterministic compute behaviour in Metal. My question: Is there any official guidance on achieving (or expecting) deterministic RNG or compute behaviour across CPU ↔ Metal GPU on Apple Silicon? More specifically: • Are deterministic compute kernels expected or guaranteed on Metal for scientific workloads? • Are there recommended patterns or best practices to ensure reproducibility across GPU generations (M1 → M2 → M3 → M4)? • Are there known Metal features that can introduce non-determinism? I am not sharing the internal recurrence (this work is proprietary), but I can discuss the high-level diagnostic observations if helpful. Thank you for any insight; I'm very interested in how the Metal engineering team views deterministic compute patterns on Apple Silicon. Pascal ENINCA Consulting
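For readers who want to check this kind of cross-device reproducibility themselves, here is a minimal sketch: it runs a simple 32-bit integer mixing function on the GPU via a compute kernel and compares the output word for word against a CPU reference. The mixer is a generic stand-in, not the proprietary recurrence above; integer shift/xor/wrapping-multiply arithmetic like this is bit-exact by specification, whereas floating-point reductions whose summation order varies are the usual source of non-determinism.

```swift
import Metal

// CPU reference: a simple 32-bit integer mixer (a stand-in, not the
// proprietary recurrence discussed above). Wrapping multiplies mirror the
// modular arithmetic the GPU performs.
func mix32(_ x: UInt32) -> UInt32 {
    var h = x
    h ^= h >> 16
    h = h &* 0x7feb352d
    h ^= h >> 15
    h = h &* 0x846ca68b
    h ^= h >> 16
    return h
}

// The same mixer as a Metal compute kernel, compiled from source at runtime.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void mix32_kernel(device uint *out [[buffer(0)]],
                         uint gid [[thread_position_in_grid]]) {
    uint h = gid;
    h ^= h >> 16;
    h *= 0x7feb352d;
    h ^= h >> 15;
    h *= 0x846ca68b;
    h ^= h >> 16;
    out[gid] = h;
}
"""

let device = MTLCreateSystemDefaultDevice()!
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "mix32_kernel")!)

let count = 1 << 20
let buffer = device.makeBuffer(length: count * MemoryLayout<UInt32>.stride,
                               options: .storageModeShared)!

let queue = device.makeCommandQueue()!
let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreadgroups(MTLSize(width: count / 256, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: 256, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

// Compare the GPU output with the CPU reference word for word.
let gpu = buffer.contents().bindMemory(to: UInt32.self, capacity: count)
let identical = (0..<count).allSatisfy { gpu[$0] == mix32(UInt32($0)) }
print("bit-exact CPU/GPU match: \(identical)")
```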
0
0
238
Nov ’25
I built apple.PHASE with Unity and targeted with visionOS, but Reverb does not sound.
Environment Versions ・macOS15.6.1 ・visionOS26.0.1 ・Xcode16.1 or 26.0.1 ・unity6000.2.9f1 ・Apple.core3.2.0 ・Apple.PHASE1.2.7 ・polyspatial2.4.2 With the above environment, after installing Apple.PHASE into Unity and building to a visionOS device, audio is available and distance attenuation works, but Early Reflection and Late Reverb produce no audible change even when enabled and their parameters are adjusted. What is required to make Early Reflection and Late Reverb take effect on a visionOS device build? Actions taken: ・created a SoundEvent. ・in composer, created a Sampler and a SpatialMixer; attached an AudioClip to the Sampler; enabled Direct Path, Early Reflection, and Late Reverb on the SpatialMixer. ・attached a PHASE Source to the object to be played, attached the created SoundEvent to it, and set non-zero values for Early Reflection and Late Reverb. ・attached a PHASE Listener to the mainCamera and set the ReverbPreset to a value other than None. ・in project settings > Audio, set Spatializer plugin to PHASE Spatializer. ・from there, build for visionOS.
0
0
780
Nov ’25
RenderBox Framework Warning
Unable to open mach-O at path: /AppleInternal/Library/BuildRoots/4~B5FIugA1pgyNPFl0-ZGG8fewoBL0-6a_xWhpzsk/Library/Caches/com.apple.xbs/Binaries/RenderBox/install/TempContent/Root/System/Library/PrivateFrameworks/RenderBox.framework/Versions/A/Resources/default.metallib Error:2 This happens only on macOS Sequoia, not on macOS Tahoe. I get a noticeable amount of lag in my app's animations where this warning arises. I've tried isolating the respective animations from the main thread too, but I still get the same lag. Is it possible to resolve this? I want to keep backwards compatibility with my app for users.
0
0
100
Nov ’25
Can MPSGraphExecutable automatically leverage Apple Neural Engine (ANE) for inference?
Hi, I'm currently using Metal Performance Shaders Graph (MPSGraphExecutable) to run neural network inference as part of a Metal rendering pipeline. I also tried to profile Neural Engine usage when running inference through MPSGraphExecutable, but the trace shows no sign of Neural Engine activity. However, when I used the Core ML model inspection tool in Xcode and ran a performance report, it was able to use the ANE. Does MPSGraphExecutable automatically utilize the Apple Neural Engine (ANE) when running inference operations, or does it only execute on the GPU? My model (Core ML package) was converted from a PyTorch model using coremltools with the ML program type and supports iOS 17.0+. Any insights or documentation references would be greatly appreciated!
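MPSGraph executes through Metal on the GPU; dispatching work to the ANE is a decision Core ML makes when it owns execution. A hedged sketch of loading the same converted package through Core ML with .all compute units, the path on which the ANE can be selected, is shown below ("MyModel" and the "input" feature name are hypothetical placeholders, not names from the post):

```swift
import Foundation
import CoreML

// Hedged sketch: run the converted model through Core ML so the framework can
// choose among CPU, GPU, and Neural Engine.
func predictViaCoreML() throws -> MLFeatureProvider {
    let config = MLModelConfiguration()
    config.computeUnits = .all            // .cpuAndGPU would exclude the ANE

    // Core ML packages are compiled to .mlmodelc when built into the app bundle.
    guard let url = Bundle.main.url(forResource: "MyModel", withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let model = try MLModel(contentsOf: url, configuration: config)

    // Placeholder input shape and feature name; adjust to the actual model.
    let array = try MLMultiArray(shape: [1, 3, 224, 224], dataType: .float32)
    let input = try MLDictionaryFeatureProvider(dictionary: ["input": array])
    return try model.prediction(from: input)
}
```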
0
0
459
Nov ’25
Metal is not installed on Xcode 26 on Xcode Cloud
Hi there, We’re encountering this error in all of our builds when using the latest Xcode and macOS: The Metal Toolchain was not installed and could not compile the Metal source files. Download the Metal Toolchain from Xcode > Settings > Components and try again. In short, all builds are failing. I’ve tried fixing this by installing Metal and applying other solutions, but none of them worked reliably. Is there a way to ensure that the Metal Toolchain is installed on the CI machine?
8
6
994
Nov ’25
visionOS 26 - Rendering Issues related to Transparency
Summary After updating to visionOS 26, we’ve encountered severe transparency rendering issues in RealityKit that did not exist in visionOS 2.6 and earlier. These regressions affect applications that dynamically control scene opacity (via OpacityComponent). Our app renders ultra-realistic apartment environments in real time, where users can walk or teleport inside 3D spaces. When the user moves above a speed threshold, we apply a global transparency effect to prevent physical collisions with real-world objects. Everything worked perfectly in visionOS 2.6 — the problems appeared only after upgrading to 26. Scene Setup Overview The environment consists of multiple USDZ models (e.g., architecture, rooms, furniture). We manage LODs manually for performance (e.g., walls and floors always visible in full-res, while rooms swap between low/high-res versions based on user position and field of view). Transparency is achieved using OpacityComponent, applied dynamically when the user moves. Some meshes (e.g., portals to skyboxes, glass windows) use alpha materials. We also use OcclusionMaterials to prevent objects from being seen through walls when the scene is transparent. Observed Behavior by Scenario (I can share a video showing the results of each scenario if needed.) Scenario 1 — Severe Flickering (Root Opacity) Setup: OpacityComponent applied to the root entity NO ModelSortGroupComponent used Symptoms: Strong flickering when transparency is active Triangles within the same mesh render at inconsistent opacity levels Appears as if per-triangle alpha sorting is broken Workaround: Moving the OpacityComponent from the root to each individual USDZ entity removes the per-triangle flicker Pros: No conflicts with portals or alpha materials Scenario 2 — Partially Stable, But Alpha Conflicts Setup: OpacityComponent applied per USDZ entity ModelSortGroupComponent(planarUIAlwaysBehind) applied to portal meshes Other entities have NO ModelSortGroupComponent Symptoms: Frequent alpha blending conflicts: Transparent surfaces behind other transparent surfaces flicker or disappear Example: Wine glasses behind glass doors — sometimes neither is rendered, or only one Even opaque meshes behind glass flicker due to depth buffer confusion Alpha materials sometimes render portals or the real world behind them, ignoring other geometry entirely Analysis: Appears related to internal changes in alpha sorting or depth pre-pass behavior introduced in visionOS 26 Pros: Most stable setup so far Cons: Still unreliable when OpacityComponent is active Scenario 3 — Layer Separation Attempt (Regression) Setup: Same as Scenario 2, but: Entities with alpha materials moved to separate USDZs Explicit ModelSortGroupComponent order set (alpha surfaces rendered last) Symptoms: Transparent surfaces behind other transparent surfaces flicker or disappear Depth is completely broken when there's a large transparent surface Alpha materials sometimes render portals or the real world behind them, ignoring other geometry entirely Workaround Attempt: Re-ordering and further separating models did not solve it Pros: None — this setup makes transparency unusable Conclusion There appears to be a regression in RealityKit’s handling of transparency and sorting in visionOS 26, particularly when: OpacityComponent is applied dynamically, and Scenes rely on multiple overlapping transparent materials. These issues did not exist prior to 26, and the same project (no code changes) behaves correctly on previous versions.
Request We’d appreciate any insight or confirmation from Apple engineers regarding: Whether alpha sorting or opacity blending behavior changed in visionOS 26 If there are new recommended practices for combining OpacityComponent with transparent materials If a bug report already exists for this regression Thanks in advance!
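For reference, a minimal sketch of the per-entity workaround described in Scenario 1 (applying the OpacityComponent to each loaded USDZ entity rather than the shared root); the function names and the array of USDZ roots are illustrative, not taken from the project above:

```swift
import RealityKit

// Hedged sketch of the Scenario 1 workaround: set the opacity on each USDZ
// root individually instead of once on the scene root.
@MainActor
func setSceneOpacity(_ opacity: Float, on usdzEntities: [Entity]) {
    for entity in usdzEntities {
        entity.components.set(OpacityComponent(opacity: opacity))
    }
}

// Restoring full visibility can either set 1.0 or remove the component.
@MainActor
func clearSceneOpacity(on usdzEntities: [Entity]) {
    for entity in usdzEntities {
        entity.components.remove(OpacityComponent.self)
    }
}
```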
0
0
200
Nov ’25
RealityView postProcess effect depth texture
Hello, Question re: iOS RealityView postProcess. I've got a working postProcess kernel and I'd like to add some depth-based effects to it. Theoretically I should be able to just do: encoder.setTexture(context.sourceDepthTexture, index: 1) and then in the kernel: texture2d<float, access::read> depthIn [[texture(1)]] ... outTexture.write(depthIn.read(gid), gid); But I consistently see all black rendered to the view. The postProcess shader works, so that's not the issue. It just seems to not be receiving actual depth information. (If I set a breakpoint at the encoder setTexture step, I can preview the color texture of the scene, but the context's depthTexture looks all NaN / blank.) I've looked at all the WWDC samples, but they include ARView for all the depth sample code, which has a different set of configuration options than RealityView. So far I haven't seen anywhere to explicitly tell RealityView "include the depth information". So I'm not sure if I'm missing something there. It appears that there is indeed a depth texture being passed, but it looks blank. Is there a working example somewhere that we can reference?
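A small hedged sanity check before binding: log what the context actually hands over, since a nil or never-written depth attachment would explain the blank reads. Only the sourceDepthTexture property mentioned in the post is assumed here; nothing else about the RealityView post-process API is.

```swift
import Metal

// Hedged sketch: inspect the depth texture the post-process context provides
// before binding it to the kernel.
func logDepthTexture(_ depth: MTLTexture?) {
    guard let depth else {
        print("post-process context provided no depth texture")
        return
    }
    // Depth attachments are typically .depth32Float; a kernel argument declared
    // as texture2d<float, access::read> may need to be depth2d<float> instead,
    // depending on the pixel format reported here. (Assumption worth verifying.)
    print("depth \(depth.width)x\(depth.height) format=\(depth.pixelFormat.rawValue) usage=\(depth.usage.rawValue)")
}

// Usage inside the post-process callback (sketch):
// logDepthTexture(context.sourceDepthTexture)
// encoder.setTexture(context.sourceDepthTexture, index: 1)
```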
2
0
621
Nov ’25
How-to highlight people in a Vision Pro app using Compositor Services
Fundamentally, my questions are: is there a known transform I can apply onto a given (pixel) position (passed into a Metal Fragment Function) to correctly sample a texture provided by the main cameras + processed by a Vision request. If so, what is it? If not, how can I accurately sample my masks? My goal is to highlight people in a Vision Pro app using Compositor Services. To start, I asynchronously receive camera frames for the main left and right cameras. This is the breakdown of the specific CameraVideoFormat I pass along to the CameraFrameProvider: minFrameDuration: 0.03 maxFrameDuration: 0.033333335 frameSize: (1920.0, 1080.0) pixelFormat: 875704422 cameraType: main cameraPositions: [left, right] cameraRectification: mono From each camera frame sample, I extract the left and right buffers (CVReadOnlyPixelBuffer.withUnsafebuffer ==> CVPixelBuffer). I asynchronously process the extracted buffers by performing a VNGeneratePersonSegmentationRequest on both of them: // NOTE: This block of code and all following code blocks contain simplified representations of my code for clarity's sake. var request = VNGeneratePersonSegmentationRequest() request.qualityLevel = .balanced request.outputPixelFormat = kCVPixelFormatType_OneComponent8 ... let lHandler = VNSequenceRequestHandler() let rHandler = VNSequenceRequestHandler() ... func processBuffers() async { try lHandler.perform([request], on: lBuffer) guard let lMask = request.results?.first?.pixelBuffer else {...} try rHandler.perform([request], on: rBuffer) guard let rMask = request.results?.first?.pixelBuffer else {...} appModel.latestPersonMasks = (lMask, rMask) } I store the two resulting CVPixelBuffers in my appModel. For both of these buffers aka grayscale masks: width (in pixels) = 512 height (in pixels) = 384 bytes per row = 512 plane count = 0 pixel format type = 1278226488 I am using Compositor Services to render my content in Immersive Space. My implementation of Compositor Services is based on the same code from Interacting with virtual content blended with passthrough. Within the Shaders.metal, the tint's Fragment Shader is now passed the grayscale masks (converted from CVPixelBuffer to MTLTexture via CVMetalTextureCacheCreateTextureFromImage() at the beginning of the main render pipeline). fragment float4 tintFragmentShader( TintInOut in [[stage_in]], ushort amp_id [[amplification_id]], texture2d<uint> leftMask [[texture(0)]], texture2d<uint> rightMask [[texture(1)]] ) { if (in.color.a <= 0.0) { discard_fragment(); } float2 uv; if (amp_id == 0) { // LEFT uv = ??????????????????????; } else { // RIGHT uv = ??????????????????????; } constexpr sampler linearSampler (mip_filter::linear, mag_filter::linear, min_filter::linear); // Sample the PersonSegmentation grayscale mask float maskValue = 0.0; if (amp_id == 0) { // LEFT if (leftMask.get_width() > 0) { maskValue = leftMask.sample(linearSampler, uv).r; } } else { // RIGHT if (rightMask.get_width() > 0) { maskValue = rightMask.sample(linearSampler, uv).r; } } if (maskValue > 250) { return float4(1.0, 1.0, 1.0, 0.5); } return in.color; } I need to correctly sample the masks for a given fragment. The LayerRenderer.Layout is set to .layered. From Developer Documentation. A layout that specifies each view’s content as a slice of a single texture. Using the Metal debugger, I know that the final render target texture for each view / eye is 1888 x 1792 pixels, giving an aspect ratio of 59:56. The initial CVPixelBuffer provided by the main left and right cameras is 1920x1080 (16:9).
The grayscale CVPixelBuffer returned by the VNPersonSegmentationRequest is 512x384 (4:3). All of these aspect ratios are different. My questions come down to: is there a known transform I can apply onto a given (pixel) position to correctly sample a texture provided by the main cameras + processed by a Vision request. If so, what is it? If not, how can I accurately sample my masks? Within the tint's Vertex Shader, after applying the modelViewProjectionMatrix, I have tried every version I have been able to find that takes the pixel space position (= vertices[vertexID].position.xy) and the viewport size (1888x1792) to compute the correct clip space position (maybe = pixel space position.xy / (viewport size * 0.5)???) of the grayscale masks, but nothing has worked. The "highlight" of the person segmentations is off: scaled a little too big, offset a little too far up and off to the side.
1
0
448
Nov ’25
Help Request! How to Render Models with SubMeshes Using Metal 4?
Hi, I'm a beginner with Metal 4 and Model I/O 🥺. I can render simple models with just one mesh, but when I try to render models with submeshes, nothing shows up on screen. Can anyone help me figure out how to properly render models with multiple submeshes? I think I'm not iterating through them correctly, or maybe I'm missing some buffer setup. Here's what I have so far: https://www.icloud.com.cn/iclouddrive/0a6x_NLwlWy-herPocExZ8g3Q#LoadModel
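For reference, the usual MetalKit pattern is one indexed draw call per submesh after binding every vertex buffer the mesh carries. The hedged sketch below uses the classic MTLRenderCommandEncoder API rather than anything Metal 4-specific, and assumes the pipeline state, uniforms, and textures are already bound:

```swift
import MetalKit

// Hedged sketch: bind the mesh's vertex buffers once, then issue one indexed
// draw per submesh.
func draw(mesh: MTKMesh, with encoder: MTLRenderCommandEncoder) {
    for (index, vertexBuffer) in mesh.vertexBuffers.enumerated() {
        encoder.setVertexBuffer(vertexBuffer.buffer,
                                offset: vertexBuffer.offset,
                                index: index)
    }
    for submesh in mesh.submeshes {
        encoder.drawIndexedPrimitives(type: submesh.primitiveType,
                                      indexCount: submesh.indexCount,
                                      indexType: submesh.indexType,
                                      indexBuffer: submesh.indexBuffer.buffer,
                                      indexBufferOffset: submesh.indexBuffer.offset)
    }
}
```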
1
0
276
Nov ’25
Cannot load .mtlpackage to MTLLibrary
After watching the WWDC 2025 session "Combine Metal 4 machine learning and graphics", I decided to give it a shot and integrate the latest MTL4MachineLearningCommandEncoder into my existing render pipeline. After a lot of trial and error, I managed to set up the pipeline and get the app to compile. However, I am now stuck on creating a MTLLibrary from a .mtlpackage. Here is the code I have to create a MTLLibrary according to the WWDC session https://developer.apple.com/videos/play/wwdc2025/262/?time=550: let coreMLFilePath = bundle.path(forResource: "my_model", ofType: "mtlpackage")! let coreMLURL = URL(string: coreMLFilePath)! do { try metalDevice.makeLibrary(URL: coreMLURL) } catch { print("error: \(error)") } With the above code, I am getting error: Error Domain=MTLLibraryErrorDomain Code=1 "Invalid metal package" UserInfo={NSLocalizedDescription=Invalid metal package} What is the correct way to create a MTLLibrary from a .mtlpackage? Am I seeing this error because the .mtlpackage I am using is incorrect? How should I go about debugging this? I'd really appreciate it if I could get some help on this, as I have been stuck with it for some time now. Thanks in advance!
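One thing worth ruling out, as a hedged sketch rather than a confirmed fix: URL(string:) on a plain filesystem path yields a URL without a file:// scheme, whereas makeLibrary(URL:) expects a file URL. Building the URL with Bundle.url(forResource:withExtension:) or URL(fileURLWithPath:) avoids that; whether it explains the "Invalid metal package" error here is an assumption.

```swift
import Metal

// Hedged sketch: load the package through a proper file URL.
func loadPackageLibrary(device: MTLDevice) throws -> MTLLibrary {
    guard let packageURL = Bundle.main.url(forResource: "my_model",
                                           withExtension: "mtlpackage") else {
        fatalError("my_model.mtlpackage not found in the app bundle")
    }
    return try device.makeLibrary(URL: packageURL)
}
```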
0
0
238
Nov ’25
Are there complete code examples available for “Combine Metal 4 machine learning and graphics”?
Hello, I recently watched the WWDC2025 session titled “Combine Metal 4 machine learning and graphics” (https://developer.apple.com/videos/play/wwdc2025/262/ ), and I’m very excited about the new Metal 4 features that integrate machine learning with graphics—such as neural ambient occlusion, shader-based ML inference, and the use of MTLTensor and MTL4MachineLearningCommandEncoder. While the session includes helpful code snippets and a compelling debug demo (e.g., the neural ambient occlusion example), the implementation details are not fully shown, and I haven’t been able to find a complete, runnable sample project that demonstrates end-to-end integration of ML and rendering in Metal 4. Would Apple be able to provide a full, working example—such as an Xcode project—that shows how to: Export a model to an .mlpackage, Convert it to an .mtlpackage, Use MTL4MachineLearningCommandEncoder alongside render passes, Or embed small neural networks directly in shaders using Shader ML? Having such a sample would greatly help developers like me adopt these powerful new capabilities correctly and efficiently. Thank you very much for your time and support! Best regards,
4
2
953
Nov ’25
no tensorflow-metal past tf 2.18?
Hi, We're on tensorflow 2.20, which now has support for Python 3.13 (finally!). tensorflow-metal still only supports 2.18, which is over a year old. When can we expect to see support in tensorflow-metal for tf 2.20 (or later!)? I bought a Mac thinking I would be able to get great performance from the M processors, but here I am using my CPU for my ML projects. If it's taking so long to release it, why not open source it so the community can keep it more up to date? cheers Matt
1
1
406
Nov ’25
Unable to compile Core Image filter on Xcode 26 due to missing Metal toolchain
I have a Core Image filter in my app that uses Metal. I cannot compile it because it complains that the executable tool metal is not available, but I have installed it in Xcode. If I go to the "Components" section of Xcode Settings, it shows it as downloaded. And if I run the suggested command, it also shows it as installed. Any advice? Xcode Version Version 26.0 beta (17A5241e) Build Output Showing All Errors Only Build target Lessons of project StudyJapanese with configuration Light RuleScriptExecution /Users/chris/Library/Developer/Xcode/DerivedData/StudyJapanese-glbneyedpsgxhscqueifpekwaofk/Build/Intermediates.noindex/StudyJapanese.build/Light-iphonesimulator/Lessons.build/DerivedSources/OtsuThresholdKernel.ci.air /Users/chris/Code/SerpentiSei/Shared/iOS/CoreImage/OtsuThresholdKernel.ci.metal normal undefined_arch (in target 'Lessons' from project 'StudyJapanese') cd /Users/chris/Code/SerpentiSei/StudyJapanese /bin/sh -c xcrun\ metal\ -w\ -c\ -fcikernel\ \"\$\{INPUT_FILE_PATH\}\"\ -o\ \"\$\{SCRIPT_OUTPUT_FILE_0\}\"' ' error: error: cannot execute tool 'metal' due to missing Metal Toolchain; use: xcodebuild -downloadComponent MetalToolchain /Users/chris/Code/SerpentiSei/StudyJapanese/error:1:1: cannot execute tool 'metal' due to missing Metal Toolchain; use: xcodebuild -downloadComponent MetalToolchain Build failed 6/9/25, 8:31 PM 27.1 seconds Result of xcodebuild -downloadComponent MetalToolchain (after switching Xcode-beta.app with xcode-select) xcodebuild -downloadComponent MetalToolchain Beginning asset download... Downloaded asset to: /System/Library/AssetsV2/com_apple_MobileAsset_MetalToolchain/4d77809b60771042e514cfcf39662c6d1c195f7d.asset/AssetData/Restore/022-19457-035.dmg Done downloading: Metal Toolchain (17A5241c). Screenshots from Xcode Result of "Copy Information" Metal Toolchain 26.0 [com.apple.MobileAsset.MetalToolchain: 17.0 (17A5241c)] (Installed)
25
0
3.4k
Oct ’25
Xcode_26 not compiling Metal project
Hello, Xcode 26.0.1 (17A400) is missing some Metal components. When building a program that uses Metal, it produces an unexpected error: “error: error: cannot execute tool 'metal' due to missing Metal Toolchain; use: xcodebuild -downloadComponent MetalToolchain Command CompileMetalFile failed with a nonzero exit code”, which terminates the build. The suggested fix, “xcodebuild -downloadComponent MetalToolchain” (run with sudo), does not work. Has anyone found a workaround or resolved the issue? Many thanks, Jean. MacBook Air M4; macOS 26.0.1; Xcode 26.0.1
3
2
324
Oct ’25
Metal recommendedMaxWorkingSetSize vs actual RAM on iPhone (LLM load fails)
Context I’m deploying large language models on iPhone using llama.cpp. A new iPhone Air (12 GB RAM) reports a Metal MTLDevice.recommendedMaxWorkingSetSize of 8,192 MB, and my attempt to load Llama-2-13B Q4_K (~7.32 GB weights) fails during model initialization. Environment Device: iPhone Air (12 GB RAM) iOS: 26 Xcode: 26.0.1 Build: llama.cpp with the Metal backend enabled App runs on device (not Simulator) What I’m seeing MTLCreateSystemDefaultDevice().recommendedMaxWorkingSetSize == 8192 MiB Loading Llama-2-13B Q4_K (7.32 GB) fails to complete. Logs indicate memory pressure / allocation issues consistent with the 8 GB working-set guidance. Smaller models (e.g., 7B/8B with similar quantization) load and run (8B Q4_K provides around 9 tokens/second decoding speed). Questions Is 8,192 MB an expected recommendedMaxWorkingSetSize on a 12 GB iPhone? What values should I expect on other 2025 devices, including iPhone 17 (8 GB RAM) and iPhone 17 Pro (12 GB RAM)? Is it strictly enforced for Metal allocations (heaps/buffers), or advisory for best performance/eviction behavior? Can a process practically exceed this for long-lived buffers without immediate Jetsam risk? Any guidance for LLM scenarios near the limit?
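For anyone comparing these numbers on their own devices, a small sketch that logs the relevant MTLDevice figures next to physical RAM. recommendedMaxWorkingSetSize is documented as an approximation of how much memory the device can use with good performance rather than a hard allocation cap, which is exactly the advisory-versus-enforced question above.

```swift
import Foundation
import Metal

// Hedged sketch: print the memory figures the question compares.
if let device = MTLCreateSystemDefaultDevice() {
    let mib = { (bytes: UInt64) in Double(bytes) / 1_048_576.0 }
    print("physical RAM:             \(mib(ProcessInfo.processInfo.physicalMemory)) MiB")
    print("recommendedMaxWorkingSet: \(mib(device.recommendedMaxWorkingSetSize)) MiB")
    print("maxBufferLength:          \(mib(UInt64(device.maxBufferLength))) MiB")
    print("currentAllocatedSize:     \(mib(UInt64(device.currentAllocatedSize))) MiB")
    print("unified memory:           \(device.hasUnifiedMemory)")
}
```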
0
0
543
Oct ’25
realitytool requires Metal for this operation and it is not available in this build environment
Hello, I'm getting started with Xcode Cloud for my project, since I upgraded to the macOS Sequoia beta and Xcode 16 now refuses to archive builds for TestFlight. Somewhere very late in the build process I get the following error: realitytool requires Metal for this operation and it is not available in this build environment The log says this happens at: Compile Skybox urban.skybox My project uses RealityKit. How can I fix this issue? Thanks!
5
5
970
Oct ’25
Help Configuring Unity for Immersive VR on Vision Pro with Pinch Teleport
How do I configure a Unity project for a fully immersive VR app on Apple Vision Pro using Metal Rendering, and add a simple pinch-to-teleport-where-looking feature? I've tried the available samples and docs, but they don't cover this clearly (to me). So far, I've reviewed Unity XR docs, Apple dev guides, and tutorials, but most emphasize spatial apps. Metal examples exist but don't include teleportation. Specifically: visionOS sample "XRI_SimpleRig" – Deploys to device/simulator, but no full immersion or teleport. XRI Toolkit sample "XR Origin Hands (XR Rig)" – Pinch gestures are detected, but not linked to movement. visionOS "XR Plugin" sample "Metal Sample URP" – Metal setup works, but static scene without locomotion. I'm new to Unity XR development and would appreciate a simple, standalone scene or document focused only on the essentials for "teleport to gaze on pinch" in VR mode—no extra features. I do have some experience with Unreal, WorldToolKit, Cosmo, etc. from the '90s, and I'm OK with code. Please include steps for: Setting up immersive VR (disabling spatial defaults if needed). Integrating pinch detection with ray-based teleport. Any config changes or basic scripts. Project Configuration: Unity Editor Version: 6000.2.5f1.2588.7373 (Revision: 6000.2/staging 43d04cd1df69) Installed Packages: Apple visionOS XR Plugin: 2.3.1 AR Foundation: 6.2.0 PolySpatial XR: 2.3.1 XR Core Utilities: 2.5.3 XR Hands: 1.6.1 XR Interaction Toolkit: 3.2.1 XR Legacy Input Helpers: 2.1.12 XR Plugin Management: 4.5.1 Imported Samples: Apple visionOS XR Plugin 2.3.1: Metal Sample - URP XR Hands 1.6.1 XR Interaction Toolkit 3.2.1: Hands Interaction Demo, Starter Assets, visionOS Build Platform Settings: Target: Apple visionOS App Mode: Metal Rendering with Compositor Services Selected Validation Profiles: visionOS Metal Documentation: Enabled Xcode Version: 26.01 visionOS SDK: 26 Mac Hardware: Apple M1 Max Target visionOS Version: 20 or 26 Test Environment: Model: Apple Vision Pro, visionOS 26.0.1 (23M341), Apple M1 Max No errors in builds so far; just missing the desired functionality. Thanks for a complete response with actionable steps.
0
0
252
Oct ’25
Metal CIKernel instances with arbitrarily structured data arguments
Hi, the iOS 13 and macOS Catalina release notes say: Metal CIKernel instances now support arguments with arbitrarily structured data. I've been trying to use this functionality in a CIKernel with mixed results. I'm particularly interested in passing data in the form of a dynamically sized array. It seems to work up to a certain size. Beyond that threshold, excess data is discarded and the kernel becomes unstable. I assume there is some kind of memory alignment issue going on, but I've tried various types in my array and always get a similar result. I have not found any documentation or sample code regarding this. It would be great to know how this is intended to work and what the limitations are. In the forums there are two similar unanswered questions about data arguments, so I'm sure there are a few out there with similar issues. Thanks! Michael
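As a hedged sketch of the usual pattern (not a confirmed answer to the instability): the array is packed into a Data that is passed in the kernel's arguments array, with the element count passed separately, and the MSL side declares a matching constant pointer or struct parameter. MSL alignment rules (for example, packing plain floats versus float4) are a plausible source of the size-dependent behaviour described above; the weights parameter and its layout here are hypothetical.

```swift
import CoreImage

// Hedged sketch: hand a dynamically sized [Float] to a Metal CIKernel as Data,
// plus its count. The MSL kernel is assumed to declare something like
// `constant float *weights` and an int count; alignment between the packed
// Swift data and the declared MSL type is the usual pitfall.
func apply(_ kernel: CIKernel, to image: CIImage, weights: [Float]) -> CIImage? {
    let data = weights.withUnsafeBufferPointer { Data(buffer: $0) }
    return kernel.apply(extent: image.extent,
                        roiCallback: { _, rect in rect },
                        arguments: [image, data, weights.count])
}
```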
5
0
476
Oct ’25