In RealityComposerPro, I've set up a Custom Material that receives an Image File as an input. When I manually select an image and upload it to RealityComposerPro as the input value, I'm able to easily drive the surface of my object/scene with this image.
However, I am unable to drive the value of this "cover" parameter via shaderGraphMaterial.setParameter(name: , value: ) in Swift since there is no way to supply an Image as a value of type MaterialParameters.Value.
When I print out shaderGraphMaterial.parameterNames, I see both "color" and "cover", so I know the parameter is exposed.
Is this a feature that will be supported soon / is there a workaround? I assume that if something can be created as an input to Custom Material (in this case an Image File), there should be an equivalent way to drive it via Swift.
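One possible workaround, assuming your SDK version's MaterialParameters.Value offers a .textureResource case: load the image as a TextureResource and pass it as a texture-typed parameter. A minimal sketch — "coverImage" is a hypothetical asset name, and "cover" is the exposed shader-graph parameter from above:

```swift
import RealityKit

// Sketch of a possible workaround, assuming MaterialParameters.Value
// supports .textureResource on your SDK version. "coverImage" is a
// hypothetical asset name in the app bundle.
func setCover(on material: inout ShaderGraphMaterial) async throws {
    let texture = try await TextureResource(named: "coverImage")
    try material.setParameter(name: "cover", value: .textureResource(texture))
}
```

This sidesteps the lack of a direct image value by converting the image to a texture resource first; whether your shader graph input surfaces as a texture-typed parameter depends on how the input was declared in Reality Composer Pro.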
Thanks!
About the problem
I built a Unity project for the visionOS platform and opened it in Xcode-beta 4.
When I tried to run the project, the build failed and the following errors appeared:
Errors
Building for 'xrOS-simulator', but linking in object file (/<Path to my Unity Xcode project>/Libraries/libiPhone-lib.a[5](MeshSkinningNEON64_blzfc.o)) built for 'xrOS'
Linker command failed with exit code 1 (use -v to see invocation)
I have no idea what causes the issue or how to solve it.
Can anybody help me find the cause and a solution?
Note
I don't think Unity itself is the cause, because the build in Unity always completes successfully.
Environment
Unity 2022.3.5f1 that has visionOS Build, iOS Build, and Mac Build modules.
Xcode-beta 4
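One way to confirm which platform the library was actually built for (assuming Apple's command-line tools are installed) is to dump its load commands; each object's LC_BUILD_VERSION records the target platform, so a device slice linked into a simulator build shows up immediately. The path below mirrors the poster's project layout and should be adjusted:

```shell
# Print each object's build-version load command; the "platform" field
# distinguishes an xrOS (device) slice from an xrOS-simulator slice.
otool -l Libraries/libiPhone-lib.a | grep -A4 LC_BUILD_VERSION | head
```

If every slice reports the device platform, the fix has to come from the Unity side (exporting a simulator-targeted build) rather than from Xcode settings.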
I have a HealthComponent and a HealthSystem which decreases the health value for all Entities with the HealthComponent every 2 seconds in the update loop of the system.
Now if, in a SwiftUI view, I want to show the current health value and bind to it, how would I do this?
I've tried adding the entity to my Character struct, but that obviously doesn't update by itself.
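One common pattern (a sketch, not the only way): have the System mirror the component's value into an @Observable model that the SwiftUI view reads. All names below are hypothetical stand-ins for the poster's types:

```swift
import SwiftUI
import RealityKit

// Hypothetical bridge: the System mirrors component state into an
// @Observable model so SwiftUI updates automatically.
@Observable
final class HealthModel {
    var currentHealth: Float = 100
}

struct HealthComponent: Component {
    var value: Float = 100
}

struct HealthSystem: System {
    static let query = EntityQuery(where: .has(HealthComponent.self))
    // Shared model the UI observes; a real app might inject this
    // through a dependency container instead of a static.
    static weak var model: HealthModel?

    init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query,
                                       updatingSystemWhen: .rendering) {
            guard let health = entity.components[HealthComponent.self] else { continue }
            // (decrement every 2 seconds here, as in your system)
            Self.model?.currentHealth = health.value
        }
    }
}

struct HealthView: View {
    @State private var model = HealthModel()

    var body: some View {
        Text("Health: \(model.currentHealth, specifier: "%.0f")")
            .onAppear { HealthSystem.model = model }
    }
}
```

Because the model is @Observable, any view reading currentHealth re-renders when the System writes to it; no manual binding to the entity is needed.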
It appears that some of the jax core functions (in pjit, mlir) are not supported. Is this something to be supported in the future?
For example, when I tested a diffrax example,
from diffrax import diffeqsolve, ODETerm, Dopri5
import jax.numpy as jnp
def f(t, y, args):
return -y
term = ODETerm(f)
solver = Dopri5()
y0 = jnp.array([2., 3.])
solution = diffeqsolve(term, solver, t0=0, t1=1, dt0=0.1, y0=y0)
It generates an error saying EmitPythonCallback is not supported on the Metal backend.
File ~/anaconda3/envs/jax-metal-0410/lib/python3.10/site-packages/jax/_src/interpreters/mlir.py:1787 in emit_python_callback
raise ValueError(
ValueError: `EmitPythonCallback` not supported on METAL backend.
I understand that, currently, no M1 or M2 chips have multiple devices or can be arranged that way, so it may not be necessary to fully implement the p*** functions (pmap, pjit, etc.). But some powerful libraries use them, so it would be great if at least some workaround for the core functions were implemented.
Or is there any easy fix for this?
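Until jax-metal supports callbacks, one workaround is to force such programs onto the CPU backend. Setting JAX_PLATFORMS before jax is imported selects the backend; note this sidesteps the GPU entirely, so Metal acceleration is lost:

```python
import os

# Must be set before jax is imported; "cpu" bypasses the Metal plugin so
# ops like EmitPythonCallback fall back to fully supported CPU lowering.
os.environ["JAX_PLATFORMS"] = "cpu"

import jax
import jax.numpy as jnp

print(jax.devices())        # CPU devices only
y0 = jnp.array([2.0, 3.0])  # the diffrax example above should now run
```

Running the diffrax example under this setting avoids the EmitPythonCallback error, at the cost of running the whole solve on CPU.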
I am trying to follow the WWDC 2022 video Explore USD tools and rendering: https://developer.apple.com/videos/play/wwdc2022/10141/
I followed the steps here to create an Xcode project that uses OpenUSD to load scenes. https://developer.apple.com/documentation/metal/metal_sample_code_library/creating_a_3d_application_with_hydra_rendering?language=objc
After installing OpenUSD and generating an Xcode project, I opened Xcode, set the scheme to hydraplayer, and clicked the build button. The code compiles but fails to link, with a batch of undefined-symbol errors like this one:
Undefined symbol: pxrInternal_v0_23__pxrReserved__::GfMatrix4d::SetDiagonal(double)
I tried to tag this post wwdc2022-10141, but the tag was not found so I tagged a related session.
Hello, I know it's quite specific, but I think this could be useful, at least for me.
I've built the Apple Unity plugins and added them to my Unity project. I've also loaded the Apple.GameKit 1.0.4 GameKit demo scene.
When I run this scene in the Unity editor, I get the prompt to select my Game Center account, and the username appears in the Editor Game window UI. From here, the other on-screen buttons work as well, I can pull up the Game Center Achievements window or the Realtime Matchmaking UI.
If I build this project for macOS and run the resulting app on the Mac, it doesn't work. In the game UI, the "Local Player Name" text never updates to show the username of my Game Center account, and none of the on-screen buttons do anything when clicked.
If I build from Unity for macOS with debugging on, I can attach a debugger while running the app. If I do this and add breakpoints, the application hits the first line of the Start() function in GameKitSample.cs:
_localPlayer = await GKLocalPlayer.Authenticate();
But execution never seems to get past this point, and no exception is thrown. It just gets stuck here, and I can't use any GameKit features in the sample scene.
What is missing? The same code interacts with Game Center just fine when run from the Unity Editor.
When using Xcode 15 Beta 5, a project created with Xcode 15 Beta 3 seemingly fails to open any scene within the RealityKitContentBundle anymore while a newly created one works flawlessly.
How can I reregister my bundle?
Hi! I manually created a USDZ with one cube, to be anchored on a wall (vertical plane):
#usda 1.0
(
    defaultPrim = "Root"
    metersPerUnit = 1
    upAxis = "Y"
)

def Xform "Root" (
    assetInfo = {
        string name = "Root"
    }
    kind = "component"
)
{
    def Xform "Geom" (
        prepend apiSchemas = [ "Preliminary_AnchoringAPI" ]
    )
    {
        # token preliminary:anchoring:type = "plane"
        # token preliminary:planeAnchoring:alignment = "vertical"
        matrix4d xformOp:transform = ( (0, 0, 0, 0), (0, 0, 0, 0), (0, 0, 0, 0), (0, 0, 0.2, 1) )

        def Xform "Group"
        {
            def Cube "cube_0"
            {
                float3[] extent = [(-1, -1, -1), (1, 1, 1)]
                uniform bool doubleSided = 1
                rel material:binding = </Root/Materials/material_0>
                matrix4d xformOp:transform = ( (0.1, 0, 0, 0), (0, 0.1, 0, 0), (0, 0, 0.01, 0), (0, 0, 0, 1) )
                uniform token[] xformOpOrder = ["xformOp:transform"]
            }
        }
    }
}
It's displayed correctly in AR Quick Look.
When I add a second cube to the USD(Z):
#usda 1.0
(
    defaultPrim = "Root"
    metersPerUnit = 1
    upAxis = "Y"
)

def Xform "Root" (
    assetInfo = {
        string name = "Root"
    }
    kind = "component"
)
{
    def Xform "Geom" (
        prepend apiSchemas = [ "Preliminary_AnchoringAPI" ]
    )
    {
        # token preliminary:anchoring:type = "plane"
        # token preliminary:planeAnchoring:alignment = "vertical"
        matrix4d xformOp:transform = ( (0, 0, 0, 0), (0, 0, 0, 0), (0, 0, 0, 0), (0, 0, 0.2, 1) )

        def Xform "Group"
        {
            def Cube "cube_0"
            {
                float3[] extent = [(-1, -1, -1), (1, 1, 1)]
                uniform bool doubleSided = 1
                rel material:binding = </Root/Materials/material_0>
                matrix4d xformOp:transform = ( (0.1, 0, 0, 0), (0, 0.1, 0, 0), (0, 0, 0.01, 0), (0, 0, 0, 1) )
                uniform token[] xformOpOrder = ["xformOp:transform"]
            }
            def Cube "cube_1"
            {
                float3[] extent = [(-1, -1, -1), (1, 1, 1)]
                uniform bool doubleSided = 1
                rel material:binding = </Root/Materials/material_0>
                matrix4d xformOp:transform = ( (0.1, 0, 0, 0), (0, 0.1, 0, 0), (0, 0, 0.01, 0), (0.3, 0, 0, 1) )
                uniform token[] xformOpOrder = ["xformOp:transform"]
            }
        }
    }
}
AR Quick Look displays the scene ~10 cm from the wall, and the more cubes the scene has, the greater the distance from the wall. Here it is for two cubes.
I also tried recreating the scene in Reality Composer on iPhone. Everything is fine for one cube, and fine when previewing two cubes in the app (ARKit?), but when I export the scene from Reality Composer on macOS to USDZ, I again see the wrong distance for two or more cubes.
For tests I use iPhone 13 Pro Max with iOS 16.3.1
I am new to Metal. I need to port OpenCL compute shader programs to Metal compute shaders. I am having trouble finding sample code in Metal with Swift or Objective-C; I can only find examples that use GPU buffer objects. As in the following OpenCL shader function, I need to pass uniform constant float and integer values along with GPU buffer pointers. I only use compute shaders.
__kernel void testfunction (
    float ratio1,
    int opr1,
    int opr2,
    __global float *INPUT1,
    __global float *INPUT2,
    __global float *OUTPUT
) {
    int peIndex = get_global_id(0);
    // main compute block
}
How can I code these in Metal? And how can I set/pass these parameter values in Swift and Objective-c main programs?
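A sketch of how this could look, with the kernel's names carried over from the OpenCL version and a made-up computation in the body for illustration. In Metal, small uniform scalars become `constant` references in the kernel and are bound from the host with setBytes(_:length:index:) (or -setBytes:length:atIndex: in Objective-C), so no MTLBuffer is needed for them; `__global` pointers become `device` pointers bound with setBuffer:

```swift
import Metal

// Metal Shading Language analogue of the OpenCL kernel above.
// The body's computation is illustrative only.
let source = """
kernel void testfunction(constant float &ratio1   [[buffer(0)]],
                         constant int   &opr1     [[buffer(1)]],
                         constant int   &opr2     [[buffer(2)]],
                         device const float *INPUT1 [[buffer(3)]],
                         device const float *INPUT2 [[buffer(4)]],
                         device float       *OUTPUT [[buffer(5)]],
                         uint peIndex [[thread_position_in_grid]])
{
    // main compute block (illustrative)
    OUTPUT[peIndex] = INPUT1[peIndex] * ratio1 + INPUT2[peIndex];
}
"""

func run(input1: [Float], input2: [Float],
         ratio1: Float, opr1: Int32, opr2: Int32) throws -> [Float] {
    let device = MTLCreateSystemDefaultDevice()!
    let library = try device.makeLibrary(source: source, options: nil)
    let pipeline = try device.makeComputePipelineState(
        function: library.makeFunction(name: "testfunction")!)
    let queue = device.makeCommandQueue()!
    let n = input1.count
    let buf1 = device.makeBuffer(bytes: input1, length: n * 4, options: .storageModeShared)!
    let buf2 = device.makeBuffer(bytes: input2, length: n * 4, options: .storageModeShared)!
    let out  = device.makeBuffer(length: n * 4, options: .storageModeShared)!

    let cmd = queue.makeCommandBuffer()!
    let enc = cmd.makeComputeCommandEncoder()!
    enc.setComputePipelineState(pipeline)
    // Small uniform values: pass by value with setBytes (no MTLBuffer).
    var r = ratio1, a = opr1, b = opr2
    enc.setBytes(&r, length: MemoryLayout<Float>.size, index: 0)
    enc.setBytes(&a, length: MemoryLayout<Int32>.size, index: 1)
    enc.setBytes(&b, length: MemoryLayout<Int32>.size, index: 2)
    enc.setBuffer(buf1, offset: 0, index: 3)
    enc.setBuffer(buf2, offset: 0, index: 4)
    enc.setBuffer(out,  offset: 0, index: 5)
    enc.dispatchThreads(MTLSize(width: n, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: min(n, 64), height: 1, depth: 1))
    enc.endEncoding()
    cmd.commit()
    cmd.waitUntilCompleted()
    return Array(UnsafeBufferPointer(
        start: out.contents().bindMemory(to: Float.self, capacity: n), count: n))
}
```

The key difference from OpenCL is that Metal has no clSetKernelArg for plain scalars: everything is bound by buffer index, and setBytes lets the driver stage small values for you.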
Hello,
I have created a material in RCP and applied it via "Material Binding" to 3D geometry within my scene... however, when I export it as a USDZ file and open it in Xcode, the material changes have not persisted.
Any suggestions on why this is so?
Hello,
I am trying to use the GPU for a machine learning task, using "mps" as the device, but it is not working. I am using the stable version of PyTorch. How can I use my MacBook's GPU for machine learning tasks?
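A minimal check worth running first: confirm that the installed PyTorch wheel was built with MPS support and that this machine can use it, then select the device with a CPU fallback:

```python
import torch

# Check that the MPS backend is both built into this PyTorch wheel and
# usable on this machine, then select it with a CPU fallback.
if torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    # is_built() == False means the installed wheel has no MPS support
    # (e.g. an x86 or conda build); otherwise macOS may be too old.
    print("MPS not available; built:", torch.backends.mps.is_built())
    device = torch.device("cpu")

x = torch.ones(3, device=device)
print(x.device)
```

A common cause of "mps not working" is an x86 (Rosetta) Python or an older wheel; is_built() distinguishes that case from an OS-version problem.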
I'm trying to compare two allocation schemes in my Metal renderer:
allocate separate MTLBuffers out of an MTLHeap
allocate one full-size MTLBuffer from an MTLHeap, then do my own suballocation and track the offsets
The first is straightforward and I get a nice overview of everything in the Xcode Metal debugger. For the second, I use addDebugMarker:range: to label each of my custom buffers. I have looked everywhere and can't see where my debug labels are supposed to appear in the debugger. The memory overview only shows the one MTLBuffer that spans the entire MTLHeap. My renderpass works as expected, but the command list and resource views only reference the single MTLBuffer as opposed to the tagged ranges.
What am I missing?
Is this a known bug, or is there a fundamental misunderstanding on my part?
In the screenshot I've attached below, I would expect the blue box to be perpendicular to the floor. It is the yAxisEntity in my code, which I instantiate with a mesh of height 3. Instead, it runs parallel to the floor, along what I'd expect to be the z axis.
Here is my code
struct ImmerisveContentDebugView: View {
    @Environment(ViewModel.self) private var model

    @State var wallAnchor: AnchorEntity = {
        return AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: SIMD2<Float>(0.1, 0.1)))
    }()

    @State var originEntity: Entity = {
        let originMesh = MeshResource.generateSphere(radius: 0.2)
        return ModelEntity(mesh: originMesh, materials: [SimpleMaterial(color: .orange, isMetallic: false)])
    }()

    @State var xAxisEntity: Entity = {
        let line = MeshResource.generateBox(width: 3, height: 0.1, depth: 0.1)
        return ModelEntity(mesh: line, materials: [SimpleMaterial(color: .red, isMetallic: false)])
    }()

    @State var yAxisEntity: Entity = {
        let line = MeshResource.generateBox(width: 0.1, height: 3, depth: 0.1)
        return ModelEntity(mesh: line, materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    }()

    @State var zAxisEntity: Entity = {
        let line = MeshResource.generateBox(width: 0.1, height: 0.1, depth: 3)
        return ModelEntity(mesh: line, materials: [SimpleMaterial(color: .green, isMetallic: false)])
    }()

    var body: some View {
        RealityView { content in
            content.add(wallAnchor)
            wallAnchor.addChild(originEntity)
            wallAnchor.addChild(xAxisEntity)
            wallAnchor.addChild(yAxisEntity)
            wallAnchor.addChild(zAxisEntity)
        }
    }
}
And here is what the simulator renders:
Hi, I'm following https://developer.apple.com/metal/jax/ to install jax on my Mac. The installation is successful. However, running the given example gives:
$ python -c 'import jax; jax.numpy.arange(10)'
2023-07-27 20:26:08.492162: W pjrt_plugin/src/mps_client.cc:535] WARNING: JAX Apple GPU support is experimental and not all JAX functionality is correctly supported!
Metal device set to: Apple M2 Pro
systemMemory: 16.00 GB
maxCacheSize: 5.33 GB
loc("-":2:3): error: custom op 'func.func' is unknown
fish: Job 1, 'python3 $argv' terminated by signal SIGSEGV (Address boundary error)
I am working on a fully immersive RealityView for visionOS, and I need to add light from the sun to my scene. I see that DirectionalLight, PointLight, and SpotLight are not available on visionOS. Does anyone know how to add light to a fully immersive scene on visionOS?
My scene is really dark right now without any additional light.
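One approach on visionOS is image-based lighting via ImageBasedLightComponent, which can stand in for analytic lights. A sketch, assuming a hypothetical HDR environment image named "Sunlight" in the app bundle:

```swift
import RealityKit

// Sketch: light a fully immersive scene with an image-based light (IBL).
// "Sunlight" is a hypothetical HDR environment image in the app bundle.
func addSunlight(to root: Entity) async throws {
    let environment = try await EnvironmentResource(named: "Sunlight")

    let lightEntity = Entity()
    lightEntity.components.set(
        ImageBasedLightComponent(source: .single(environment)))
    root.addChild(lightEntity)

    // Entities opt in to receiving the IBL via a receiver component.
    root.components.set(
        ImageBasedLightReceiverComponent(imageBasedLight: lightEntity))
}
```

Note that entities only pick up the light if they (or an ancestor whose component they inherit) carry the receiver component pointing at the IBL entity.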
I have a RealityView in my visionOS app. I can't figure out how to access RealityRenderer. According to the documentation (https://developer.apple.com/documentation/realitykit/realityrenderer) it is available on visionOS, but I can't figure out how to access it for my RealityView. It is probably something obvious, but after reading through the documentation for RealityView, Entities, and Components, I can't find it.
I'm currently testing photogrammetry by capturing photos with the sample project
https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture
Then use them on my laptop with https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app
It worked perfectly until the latest Sonoma beta updates.
It started with warning logs in the console saying my samples lacked a depthMap, and now it just refuses to create samples from my HEIC files.
I tried creating HEIC files with and without depth data to check whether badly formatted depth data was the problem, but it seems it's the HEIC format itself that is no longer accepted. I've also imported HEIC files captured with the standard iOS camera app and transferred via the Photos app, and they don't work either, so it's not an issue of poorly formatted files.
If I convert the files to PNG, it works again, but of course, as announced during WWDC 2023, I expect the photogrammetry pipeline to leverage the LiDAR data!
I check every beta update waiting for an improvement. The photogrammetry logs are never the same, so I guess the Apple teams are working on it.
Of course, the Object Capture model in Reality Composer Pro also doesn't accept HEIC files anymore.
If there are any workarounds, please advise!
Is there support for blend shapes in RealityKit? I don't see controls for them in the API, and the blend shapes that should be on my model aren't appearing in Reality Composer Pro.
I am working on creating a "Volume" application in RealityKit for visionOS. I want to create a texture on the CPU that I can hook into a Material and modify. When I go to create the texture I get this error: Linear textures from shared buffers is not supported on this device
Here is the code:
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("unable to get metal device")
}
let textureDescriptor = MTLTextureDescriptor.textureBufferDescriptor(
    with: .r32Float, width: 64, resourceOptions: .storageModeShared, usage: .shaderRead)
let buffer = device.makeBuffer(length: 64 * 4, options: .storageModeShared)
return buffer?.makeTexture(descriptor: textureDescriptor, offset: 0, bytesPerRow: 64 * 4)
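An alternative sketch that avoids the error entirely: instead of wrapping a shared MTLBuffer (which this device rejects for linear textures), allocate a shared-storage texture directly from a descriptor and update it from the CPU with replace(region:mipmapLevel:withBytes:bytesPerRow:). The function name is hypothetical:

```swift
import Metal

// Alternative sketch: allocate a shared-storage texture directly and
// write to it from the CPU, instead of aliasing a shared MTLBuffer.
func makeCPUWritableTexture(device: MTLDevice, width: Int) -> MTLTexture? {
    let desc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .r32Float, width: width, height: 1, mipmapped: false)
    desc.storageMode = .shared
    desc.usage = .shaderRead
    return device.makeTexture(descriptor: desc)
}

// Updating the texture contents each frame might look like:
// var values = [Float](repeating: 0, count: 64)
// texture.replace(region: MTLRegionMake2D(0, 0, 64, 1), mipmapLevel: 0,
//                 withBytes: &values, bytesPerRow: 64 * 4)
```

The trade-off is that replace(region:) copies the data on each update rather than giving you a persistently mapped pointer, but it works on devices that don't support linear textures from shared buffers.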