I have tested MetalFX spatial upscaling with a Unity URP sample project: https://github.com/mao-test-h/MetalFXSamples.
I used an iPhone 13 with iOS beta 7. Performance and quality are both worse than native rendering.
I downloaded the sample code from Capturing depth using the LiDAR camera: https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera
I'm on an iPad Pro 2nd generation, iPadOS 16.6.
I'm running the code and the app crashes with this error:
Fatal error: Unable to configure the capture session.
2023-09-08 12:56:44.761898-0400 LiDARDepth[2393:828514]
Is there a correct version of the code?
How do I create & open an immersive space window scene from a UIKit view or view controller? I need to create one in order to use Compositor Services in order to draw a 3D object using Metal, but this particular GUI is drawn & laid out using UIKit, and it isn't possible for me to rewrite it to use SwiftUI.
I already tried [UIApplication.sharedApplication activateSceneSessionForRequest:[UISceneSessionActivationRequest requestWithRole:UISceneSessionRoleImmersiveSpaceApplication] errorHandler:...], but all that happened was it opened a new window for the main application scene (UIWindowSceneSessionRoleApplication), instead of opening an immersive space scene as I expected.
Yes, I did create a scene manifest in my app's Info.plist, with a UIWindowSceneSessionRoleApplication scene, and a CPSceneSessionRoleImmersiveSpaceApplication scene. Surely there has to be a way to do this without resorting to SwiftUI...
I'm trying to build a volume scene, and I get this error:
Build/B3/Libraries/ARM64/Packages/com.unity.xr.visionos/Runtime/VisionOSNativeBridge.mm:457:33 Use of undeclared identifier 'ar_plane_extent_get_plane_anchor_from_plane_extent_transform'
The line in the file is:
simd_float4x4 worldMatrix = ar_plane_extent_get_plane_anchor_from_plane_extent_transform(plane_extent);
I previously got this error in Xcode 15 beta 8 and now in Xcode 15 beta 2.
Unity 2022.3.5f1 LTS
Hi, I'm trying to use metal-cpp, but I get a compile error:
ISO C++ requires the name after '::' to be found in the same scope as the name before '::'
metal-cpp/Foundation/NSSharedPtr.hpp(162):
template <class _Class>
_NS_INLINE NS::SharedPtr<_Class>::~SharedPtr()
{
if (m_pObject)
{
m_pObject->release();
}
}
Use of old-style cast
metal-cpp/Foundation/NSObject.hpp(149):
template <class _Dst>
_NS_INLINE _Dst NS::Object::bridgingCast(const void* pObj)
{
#ifdef __OBJC__
return (__bridge _Dst)pObj;
#else
return (_Dst)pObj;
#endif // __OBJC__
}
The Xcode project was generated using CMake:
target_compile_features(${MODULE_NAME} PRIVATE cxx_std_20)
target_compile_options(${MODULE_NAME}
PRIVATE
"-Wgnu-anonymous-struct"
"-Wold-style-cast"
"-Wdtor-name"
"-Wpedantic"
"-Wno-gnu"
)
Maybe I need to set some CMake flags for the C++ compiler?
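One idea I am considering (a sketch, assuming the metal-cpp headers sit in a metal-cpp/ subdirectory next to CMakeLists.txt): mark them as a SYSTEM include, so the compiler stops reporting warnings that originate inside them while my own sources keep the strict warning set:

```cmake
# Treat metal-cpp as a system header directory; clang then suppresses
# -Wold-style-cast / -Wdtor-name diagnostics coming from inside it,
# without relaxing the warnings on my own code.
target_include_directories(${MODULE_NAME} SYSTEM PRIVATE
    "${CMAKE_CURRENT_SOURCE_DIR}/metal-cpp"
)
```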
With the Xcode 15 RC, the documentation for the Metal pipelines script (man metal-pipelines-script) doesn't mention anything about defining a mesh render pipeline (MTLMeshRenderPipelineDescriptor).
Is there a way to do offline compilation, or to harvest binary archives, for mesh shaders?
Hello,
I'm trying to optimize code that loads half2 vectors from threadgroup (or constant) memory. For example:
// option A: one 16-byte read, then unpack
#define load_4half2(x, y, z, w, p, i) do {                        \
    uint4 readU4 = *((threadgroup uint4 *)((p) + (i)));           \
    x = as_type<half2>(readU4.x);                                 \
    y = as_type<half2>(readU4.y);                                 \
    z = as_type<half2>(readU4.z);                                 \
    w = as_type<half2>(readU4.w);                                 \
} while(0)

// option B: read one by one
#define load_4half2(x, y, z, w, p, i) do {                        \
    threadgroup half2 *read2 = (threadgroup half2 *)((p) + (i));  \
    x = read2[0];                                                 \
    y = read2[1];                                                 \
    z = read2[2];                                                 \
    w = read2[3];                                                 \
} while(0)
I haven't figured out how to get at the "disassembled" code, so I'm not sure which is the better solution to this problem. Could anyone kindly shed some light on this?
Thanks a lot!
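For what it's worth, I believe the final GPU machine code is only visible through Xcode's GPU frame capture / shader profiler, but the front-end output can be inspected offline. A sketch (assuming the usual clang-style flags work on the metal driver, which I have not verified on every toolchain; Kernel.metal is a placeholder name):

```shell
# Compile the shader offline and emit human-readable LLVM IR (AIR),
# which at least shows whether option A collapses into a single load.
xcrun -sdk iphoneos metal -S -emit-llvm Kernel.metal -o Kernel.ll
less Kernel.ll
```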
Since the type identifiers in UTCoreTypes.h have been deprecated, what's the expected way to use the Core Graphics APIs that use those types, particularly in C code that doesn't have access to the UniformTypeIdentifiers framework?
Using CFSTR( "public.jpeg" ) works, but is that the new best practice, or have the core type definitions been moved/renamed?
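For context, in Swift (or Objective-C) the replacement lives in the UniformTypeIdentifiers framework, and as far as I can tell the raw identifier strings themselves are not deprecated; only the kUTType* constants are. A sketch in Swift (the CGImageDestinationCreateWithURL line is just an illustration of where the identifier would be used):

```swift
import UniformTypeIdentifiers
import ImageIO

// UTType.jpeg.identifier is "public.jpeg" - the string itself is unchanged,
// only the old kUTTypeJPEG constant from UTCoreTypes.h is deprecated.
let jpegIdentifier = UTType.jpeg.identifier as CFString

// e.g. when creating an image destination:
// let dest = CGImageDestinationCreateWithURL(url, jpegIdentifier, 1, nil)
```

So for plain C code without access to the framework, CFSTR("public.jpeg") does seem to remain the practical option.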
Looking at the documentation for the methods to create MTLRenderPipelineStates
I'm trying to understand the differences between the different RenderPipelineStates created by using:
MTLRenderPipelineDescriptor (5 methods)
MTLTileRenderPipelineDescriptor (3 methods)
MTLMeshRenderPipelineDescriptor (2 methods)
Not all methods that exist for the MTLRenderPipelineDescriptor case exist for the Tile and Mesh variants and I was wondering why. The only way to synchronously make a Mesh PipelineState is currently by this method:
func makeRenderPipelineState(
descriptor: MTLMeshRenderPipelineDescriptor,
options: MTLPipelineOption
) throws -> (MTLRenderPipelineState, MTLRenderPipelineReflection?)
which also creates an MTLRenderPipelineReflection?
Is there a clear reason for that which I just fail to understand? Or are these methods just not there at the moment?
For example, the example code in https://developer.apple.com/wwdc22/10162 does not compile:
// initialize pipeline state object
var meshPipeline: MTLRenderPipelineState!
do {
    meshPipeline = try device.makeRenderPipelineState(descriptor: meshPipelineDescriptor)
} catch {
    print("Error when creating pipeline state: \(error)")
}
It's said that the A17 will support hardware ray tracing. It will support both the ray query (inline ray tracing) and ray pipeline features, right? And we can use the ray tracing functions in both compute and graphics shaders, right?
Can we use custom intersection test functions? How can we handle SDF intersection tests in the ray tracing pipeline?
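To partly answer my own question about custom intersection: my understanding is that custom tests are written as intersection functions, and an SDF can be sphere-traced inside a bounding-box primitive. A sketch (Metal; sceneSDF and all names are hypothetical, not from any Apple sample):

```metal
#include <metal_stdlib>
using namespace metal;

// Hypothetical SDF: a sphere of radius 0.5 centred at the origin.
float sceneSDF(float3 p) { return length(p) - 0.5f; }

struct BoundingBoxResult {
    bool  accept   [[accept_intersection]];
    float distance [[distance]];
};

// Called whenever a ray enters this primitive's bounding box;
// we sphere-trace the SDF to decide whether the ray actually hits.
[[intersection(bounding_box)]]
BoundingBoxResult sdfIntersect(float3 origin      [[origin]],
                               float3 direction   [[direction]],
                               float  minDistance [[min_distance]],
                               float  maxDistance [[max_distance]])
{
    BoundingBoxResult result;
    result.accept = false;

    float t = minDistance;
    for (int i = 0; i < 64 && t < maxDistance; ++i) {
        float d = sceneSDF(origin + t * direction);
        if (d < 1e-3f) {            // close enough to the surface: report a hit
            result.accept = true;
            result.distance = t;
            return result;
        }
        t += d;                     // safe step: the SDF value is the distance bound
    }
    return result;
}
```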
I've got the following code to generate an MDLMaterial from my own material data model:
public extension MaterialModel {
var mdlMaterial: MDLMaterial {
let f = MDLPhysicallyPlausibleScatteringFunction()
f.metallic.floatValue = metallic
f.baseColor.color = CGColor(red: CGFloat(color.x), green: CGFloat(color.y), blue: CGFloat(color.z), alpha: 1.0)
f.roughness.floatValue = roughness
return MDLMaterial(name: name, scatteringFunction: f)
}
}
When exporting to OBJ, I get the expected material properties:
# Apple ModelI/O MTL File: testExport.mtl
newmtl material_1
Kd 0.163277 0.0344635 0.229603
Ka 0 0 0
Ks 0
ao 0
subsurface 0
metallic 0
specularTint 0
roughness 0
anisotropicRotation 0
sheen 0.05
sheenTint 0
clearCoat 0
clearCoatGloss 0
newmtl material_2
Kd 0.814449 0.227477 0.124541
Ka 0 0 0
Ks 0
ao 0
subsurface 0
metallic 0
specularTint 0
roughness 1
anisotropicRotation 0
sheen 0.05
sheenTint 0
clearCoat 0
clearCoatGloss 0
However, when exporting to USD, I just get:
#usda 1.0
(
defaultPrim = "_0"
endTimeCode = 0
startTimeCode = 0
timeCodesPerSecond = 60
upAxis = "Y"
)
def Xform "Obj0"
{
def Mesh "_"
{
uniform bool doubleSided = 0
float3[] extent = [(896, 896, 896), (1152, 1152, 1148.3729)]
int[] faceVertexCounts = ...
int[] faceVertexIndices = ...
point3f[] points = ...
}
def Mesh "_0"
{
uniform bool doubleSided = 0
float3[] extent = [(898.3113, 896.921, 1014.4961), (1082.166, 1146.7178, 1152)]
int[] faceVertexCounts = ...
int[] faceVertexIndices = ...
point3f[] points = ...
matrix4d xformOp:transform = ( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )
uniform token[] xformOpOrder = ["xformOp:transform"]
}
}
There aren't any material properties.
FWIW, this specifies a set of common material parameters for USD: https://openusd.org/release/spec_usdpreviewsurface.html
(Note: there is no tag for ModelIO, so using SceneKit, etc.)
Hi all, I have tried a simple search and ended up here. I am trying to debug an issue with testing my game using GPTK. It looks like the usual debug flags do not apply to GPTK; does anyone have an idea of how I can see what is being called through GPTK or Wine as my game loads? I am getting a generation error, and I believe a DLL is responsible, as it checks for the CPU ID. I assume something is missing in Wine's Windows structure, isn't in the wrapper's Windows registry, or is an API that isn't fully implemented. I am not sure how to debug this or see what is really going on at this point. Any help is greatly appreciated.
Is it advisable to integrate Augmented Reality (AR) into our e-commerce mobile app? Please share insights on the benefits and potential drawbacks, as we aim to enhance the user experience and drive sales. Your expertise is greatly appreciated.
Has anybody noticed a pivot issue in models constructed through Object Capture?
Ideally the pivot of the object should be at the centre of its bounding box, but with the new macOS changes the pivot is now at (0, 0, 0), below the bounding box.
Here is a quick comparison, old vs. new:
Is there a way to combine the RoomPlan API with ARKit + MetalKit to get both a point-cloud .ply output and the RoomPlan 3D room model as .usdz?
I'm trying to convert a GLB file to USDZ using Reality Converter. According to the Khronos glTF validator, it is valid, with no warnings.
The file contains a single animation. Is it possible to get any more information about the error that happened than this?
The file I'm trying to convert is https://www.dropbox.com/scl/fi/ng38kle0drr4srypw5ntz/Project-Name-copy-1-1.glb?rlkey=ay4d1q3n1ykyixe3krlvnfeyb&dl=1 (I cannot attach it here because limit is 244.140625Kb...)
The workaround to get this conversion working is to remove the animation, or to change the order of objects. Removing the animation is not an option, and changing the order of objects is a bit of a magic solution that cannot really be automated without understanding what is happening.
I downloaded Reality Converter from https://developer.apple.com/augmented-reality/tools/ today, so I assume it is the latest version.
I am planning to use Core Graphics for its low-level functionality, so I wrote up a small snippet that I expected to work:
#include <CoreGraphics/CoreGraphics.h>
#include <stdio.h> // needed for printf

int main(void) {
    double rot = CGDisplayRotation(0);
    printf("Rotation %f\n", rot);
    return 0;
}
Unfortunately, it seems that the call to CGDisplayRotation blocks. When I tried writing a similar snippet in Xcode, though, it works just fine; but I will not be able to use Xcode, for unrelated reasons. Am I compiling incorrectly? Could this be a permissions issue?
I compiled with clang -Wall -framework CoreGraphics core.c -o core.o
I am using the RoomPlanExampleApp to export a USDZ. When I open the USDZ in Reality Converter, I get this error:
Invalid Structure in USDZ file
The root layer of the package must be a usdc file and must not include any external dependencies that participate in stage composition.
How do I get a valid stack trace with demangled symbol names and file/line information? I've already demangled the names myself, even though Windows does this for me.
The recommended approach to get file/line seems to be to spawn the atos process repeatedly on the symbols to turn them all into file/line. But shouldn't there just be a function call for this, or an option to backtrace_symbols(), since it's looking up the symbol anyway?
I don't get how this external-process call would work for iOS, and it seems slow for macOS as well.
Compare this with Windows' CaptureStackBackTrace, where there is a simple function call via DbgHelp.lib to retrieve the file/line. Am I supposed to somehow use the CoreSymbolication framework on macOS/iOS?
Hello Apple community,
I hope this message finds you well. I'm writing to report an issue that I've encountered after upgrading my iPad to iPadOS 17. The problem seems to be related to the Quick Look AR application, which I use extensively for 3D modeling and visualization.
Prior to the upgrade, everything was working perfectly fine. I create 3D models in Reality Composer and export them as USDZ files for use with Quick Look AR. However, after the upgrade to iPadOS 17, I've noticed a rather troubling issue.
Problem Description:
When I view my 3D models using Quick Look AR on iPadOS 17, some of the 3D models exhibit a peculiar problem. Instead of displaying the correct textures, they show a bright pink texture in their place. This issue occurs only when I have subsequent scenes added to the initial scene. Strangely, the very first scene in the sequence displays the textures correctly.
Steps to Reproduce:
Create a 3D model in Reality Composer.
Export the model as a USDZ file.
Open the USDZ file using Quick Look AR.
Observe that the textures appear correctly on the initial scene.
Add additional scenes to the model.
Navigate to the subsequent scenes.
Notice that some of the 3D models display a pink texture instead of the correct textures (see picture).
Expected Behavior:
The 3D models should consistently display their textures, even when multiple scenes are added to the scene sequence.
Workaround:
As of now, there doesn't seem to be a viable workaround for this issue, which is quite problematic for my work in 3D modeling and visualization.
I would greatly appreciate any insights, solutions, or workarounds that the community might have for this problem. Additionally, I would like to know if others are experiencing the same issue after upgrading to iPadOS 17. This information could be helpful for both users and Apple in addressing this problem.
Thank you for your attention to this matter, and I look forward to hearing from the community and hopefully finding a resolution to this Quick Look AR issue.
Best regards