Hello
I am trying to get threadgroup memory access in a fragment shader. In essence, I would like all the fragments in a tile to bitwise-OR some value. My idea was to use simd_or across the SIMD group, then have thread 0 of each SIMD group atomically OR the value into threadgroup memory. Finally, the very first thread of the tile would be tasked with writing the value to a texture with write access.
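For reference, here is a minimal MSL sketch of the reduction described above. It is only an illustration, assuming a GPU family that supports threadgroup memory in fragment functions; the binding indices and names are made up, and the final write to the texture by the tile's first thread is omitted.
#include <metal_stdlib>
using namespace metal;

// Sketch only: assumes 16 bytes of threadgroup memory bound at threadgroup index 0
// and a per-fragment value arriving through buffer(0).
fragment void orReduce(constant uint           &value  [[buffer(0)]],
                       threadgroup atomic_uint *tileOr [[threadgroup(0)]])
{
    uint combined = simd_or(value);    // reduce across the SIMD group
    if (simd_is_first()) {             // one atomic per SIMD group
        atomic_fetch_or_explicit(tileOr, combined, memory_order_relaxed);
    }
}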
Now, I can allocate the threadgroup memory argument to the fragment function all right. MTLRenderCommandEncoder has a setThreadgroupMemoryLength call, which I am using the following way:
[renderEncoder setThreadgroupMemoryLength:16 offset:0 atIndex:0];
Unfortunately, all I am getting is the following error (runtime assertion):
-[MTLDebugRenderCommandEncoder setThreadgroupMemoryLength:offset:atIndex:]:3487: failed assertion Set Threadgroup Memory Length Validation
offset + length(16) must be <= threadgroupMemoryLength(0).
What am I doing wrong? How can I get threadgroup memory in the fragment shader? I know I could use tile shading and a compute function, but the problem is that here I would really like to stay with the fragment path. I will be grateful for help.
In Xcode version 16.2 (16C5032a) I created:
One [ScenekitApp] [File/New/Project/iOS/Game/Game Technology: SceneKit].
Three custom Frameworks [File/New/Project/iOS/Framework] [ASCSource.framework, ASCCommon.framework and ASCCoreData.framework].
In all four projects, [Minimum Deployments=iOS 15.0], [Swift version=6.0] and [BUILD_LIBRARY_FOR_DISTRIBUTION=Yes] are set (see the xcconfig sketch below).
I installed [Zip.framework] directly in [ASCSource.framework], without [pod init/pod install], and in the [General] tab under [Frameworks, Libraries and Embedded Content] set [Zip.framework] to [Embed & Sign].
I installed the three frameworks [ASCSource.framework, ASCCommon.framework and ASCCoreData.framework] in [ScenekitApp] and everything works perfectly in Xcode version 16.2(16C5032a).
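For reference, the shared settings described above correspond to a hypothetical xcconfig fragment like this, applied to each of the four targets:
// Hypothetical .xcconfig; mirrors the build settings listed above.
IPHONEOS_DEPLOYMENT_TARGET = 15.0
SWIFT_VERSION = 6.0
BUILD_LIBRARY_FOR_DISTRIBUTION = YES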
I updated Xcode 16.2 (16C5032a) to Xcode 16.3 (16E140), and some errors in the SDKs were reported.
[Failed to build module 'ASCSource'; this SDK is not supported by the compiler (the SDK is built with 'Apple Swift version 6.0.3 effective-5.10 (swiftlang-6.0.3.1.10 clang-1600.0.30.1)', while this compiler is 'Apple Swift version 6.1 effective-5.10 (swiftlang-6.1.0.110.21 clang-1700.0.13.3)'). Please select a toolchain which matches the SDK].
[Failed to build module 'Zip'; this SDK is not supported by the compiler (the SDK is built with 'Apple Swift version 6.0.3 effective-5.10 (swiftlang-6.0.3.1.10 clang-1600.0.30.1)', while this compiler is 'Apple Swift version 6.1 effective-5.10 (swiftlang-6.1.0.110.21 clang-1700.0.13.3)'). Please select a toolchain which matches the SDK].
If I recompile all the frameworks in Xcode 16.3 (16E140) it works, but I don't think that is a good solution.
I found some discussions at the following links, but without success:
https://github.com/firebase/firebase-ios-sdk/issues/13727
https://stackoverflow.com/questions/70556401/swift-version-conflict-this-sdk-is-not-supported-by-the-compiler-please-select
Any help is welcome.
Thanks everyone!!!
I'm building a game with a client-server architecture. Using GKMatch.chooseBestHostingPlayer(_:) rarely works. When I started testing it today, it worked once at the very beginning, and since then it always succeeds on one client and returns nil on the other client. I'm testing with a Mac and an iPhone. Sometimes it fails on the Mac, sometimes on the iPhone. On the device that it succeeds on, the provided host can be the device itself or the other one.
I created FB9583628 in August 2021, but after the Feedback Assistant team replied that they are not able to reproduce it, the feedback never went forward.
import SceneKit
import GameKit

#if os(macOS)
typealias ViewController = NSViewController
#else
typealias ViewController = UIViewController
#endif

class GameViewController: ViewController, GKMatchmakerViewControllerDelegate, GKMatchDelegate {

    var match: GKMatch?
    var matchStarted = false

    override func viewDidLoad() {
        super.viewDidLoad()
        GKLocalPlayer.local.authenticateHandler = authenticate
    }

    private func authenticate(_ viewController: ViewController?, _ error: Error?) {
        #if os(macOS)
        if let viewController = viewController {
            presentAsSheet(viewController)
        } else if let error = error {
            print(error)
        } else {
            print("authenticated as \(GKLocalPlayer.local.gamePlayerID)")
            let viewController = GKMatchmakerViewController(matchRequest: defaultMatchRequest())!
            viewController.matchmakerDelegate = self
            GKDialogController.shared().present(viewController)
        }
        #else
        if let viewController = viewController {
            present(viewController, animated: true)
        } else if let error = error {
            print(error)
        } else {
            print("authenticated as \(GKLocalPlayer.local.gamePlayerID)")
            let viewController = GKMatchmakerViewController(matchRequest: defaultMatchRequest())!
            viewController.matchmakerDelegate = self
            present(viewController, animated: true)
        }
        #endif
    }

    private func defaultMatchRequest() -> GKMatchRequest {
        let request = GKMatchRequest()
        request.minPlayers = 2
        request.maxPlayers = 2
        request.defaultNumberOfPlayers = 2
        request.inviteMessage = "Ciao!"
        return request
    }

    func matchmakerViewControllerWasCancelled(_ viewController: GKMatchmakerViewController) {
        print("cancelled")
    }

    func matchmakerViewController(_ viewController: GKMatchmakerViewController, didFailWithError error: Error) {
        print(error)
    }

    func matchmakerViewController(_ viewController: GKMatchmakerViewController, didFind match: GKMatch) {
        self.match = match
        match.delegate = self
        startMatch()
    }

    func match(_ match: GKMatch, player: GKPlayer, didChange state: GKPlayerConnectionState) {
        print("\(player.gamePlayerID) changed state to \(String(describing: state))")
        startMatch()
    }

    func startMatch() {
        let match = match!
        if matchStarted || match.expectedPlayerCount > 0 {
            return
        }
        print("starting match with local player \(GKLocalPlayer.local.gamePlayerID) and remote players \(match.players.map({ $0.gamePlayerID }))")
        match.chooseBestHostingPlayer { host in
            print("host is \(String(describing: host?.gamePlayerID))")
        }
    }
}
In my SceneKit game I'm able to connect two players with GKMatchmakerViewController. Now I want to support the scenario where one of them disconnects and wants to reconnect. I tried to do this with this code:
nonisolated public func match(_ match: GKMatch, player: GKPlayer, didChange state: GKPlayerConnectionState) {
    Task { @MainActor in
        switch state {
        case .connected:
            break
        case .disconnected, .unknown:
            let matchRequest = GKMatchRequest()
            matchRequest.recipients = [player]
            do {
                try await GKMatchmaker.shared().addPlayers(to: match, matchRequest: matchRequest)
            } catch {
            }
        @unknown default:
            break
        }
    }
}

nonisolated public func player(_ player: GKPlayer, didAccept invite: GKInvite) {
    guard let viewController = GKMatchmakerViewController(invite: invite) else {
        return
    }
    viewController.matchmakerDelegate = self
    present(viewController)
}
But after presenting the view controller created with GKMatchmakerViewController(invite:), nothing else happens. I would expect matchmakerViewController(_:didFind:) to be called; otherwise, how would I get an instance of GKMatch?
Here is the code I use to reproduce the issue, followed by the reproduction steps.
Code
Run the attached project on an iPad and a Mac simultaneously.
On both devices, tap the ship to connect to Game Center.
Create an automatched match by tapping the rightmost icon on both devices.
When the two devices are matched, on the iPad close the dialog and tap the ship to disconnect from Game Center.
Wait some time until the Mac detects the disconnect and automatically sends an invitation to join again.
When the notification arrives on the iPad, tap it, then tap the ship to connect to Game Center again. The iPad receives the player(_:didAccept:) call, but nothing else, so there's no way to get a GKMatch instance again.
Hey everyone, I am currently developing an app for visionOS and using Reality Composer Pro to create scenes to put in my app.
I have a humanoid model with hair strands, and each strand of hair has an opacity map. However, some reflections are still visible even though the opacity is zero. There is also some weird culling among hair strands (in the left circle) and weird reflections in the hair cards (in the right circle).
Here are my settings for the materials.
Since all the hair strands are interconnected with each other, it is hard to decide the drawing order in Xcode, so I am wondering if there's an easier way to handle transparent objects.
Please let me know if you know anything helpful, much appreciated!
Hello,
Could someone post code that shows how to implement GCVirtualController to move a box around the screen?
I've been poking around with GCVirtualController and have gotten as far as having the D-pad and A/B buttons appear on the display. But how do I make it do anything?
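Not an authoritative answer, but here is a minimal SpriteKit sketch of one way to wire up GCVirtualController to move a box. The element choice, node setup, and speed factor are assumptions; in a real app you would also observe GCController.didConnectNotification instead of reading the controller right after connecting.
import SpriteKit
import GameController

class BoxScene: SKScene {
    private let box = SKSpriteNode(color: .red, size: CGSize(width: 40, height: 40))
    private var virtualController: GCVirtualController?
    private var velocity = CGVector.zero

    override func didMove(to view: SKView) {
        box.position = CGPoint(x: frame.midX, y: frame.midY)
        addChild(box)

        // Describe which on-screen elements you want, then connect.
        let configuration = GCVirtualController.Configuration()
        configuration.elements = [GCInputLeftThumbstick, GCInputButtonA, GCInputButtonB]
        let controller = GCVirtualController(configuration: configuration)
        controller.connect { error in
            if let error { print(error) }
        }
        virtualController = controller

        // Once connected, the virtual controller behaves like any other GCController.
        controller.controller?.extendedGamepad?.leftThumbstick.valueChangedHandler = { [weak self] _, x, y in
            self?.velocity = CGVector(dx: CGFloat(x) * 5, dy: CGFloat(y) * 5)
        }
    }

    override func update(_ currentTime: TimeInterval) {
        // Apply the thumbstick's current value every frame.
        box.position.x += velocity.dx
        box.position.y += velocity.dy
    }
}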
I've been working on a Swift game which is not yet launched or available for preview.
The game works in such a way that it has idle CPU while the user is thinking, and sustained max CPU and GPU on as many cores as possible when they make a move.
Rarely, due to OS activity or something else outside of my control (for example, when pulling down the OS curtain even for just a bit and then removing it), the game or some of its threads get moved to efficiency cores. This results in major stuttering that persists precisely until the game is idle again, at which point the game is moved back onto performance cores. But if the player keeps making moves, the stuttering simply won't go away, so I guess the computation gets locked onto efficiency cores.
The issue does not reproduce on Mac Catalyst on Intel.
How do I tell Swift to avoid efficiency cores?
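For reference, the main knob an app can set itself is quality of service, which is a scheduling hint rather than a guarantee of performance-core placement. A minimal sketch (searchBestMove is a placeholder for the game's move computation):
import Foundation

func searchBestMove() { /* placeholder for the heavy move computation */ }

// A dedicated thread at the highest QoS class.
let worker = Thread { searchBestMove() }
worker.qualityOfService = .userInteractive
worker.start()

// Equivalent for one-off work items.
DispatchQueue.global(qos: .userInteractive).async { searchBestMove() }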
By the way, Swift and SceneKit have AMAZING performance, especially when compared to others.
Hello
If you add a ModelEntity to a world inside a portal, the drawing of the model is occluded properly at the portal bounds.
However, the invisible shapes of the InputTargetComponent and CollisionComponent are not occluded. They can cross the portal, and if you have gestures on your ModelEntity you can trigger them in areas outside the portal bounds. This happens even if the ModelEntity has no PortalCrossingComponent.
Hi,
I am working on a large project. We compile each material to its own .metallib. They all include many common files full of inline functions. Finally, we link it all together at the end with a single big pathtrace kernel. Everything works as expected; however, the compile times have gotten completely out of hand and it takes multiple minutes to compile (to native code) at runtime. I have gathered that I can do this offline by using metal-tt; however, I am wondering if there is a way to reduce the compile times in such a scenario, and how to investigate the root cause of the problem. I suspect it could have to do with the fact that every material's metallib contains duplicates of all the inline functions. Any ideas on how to profile and debug this?
Thanks,
Rasmus
Hello. In the iOS app I'm working on we are very tight on memory budget, and I was looking at ways to reduce our texture memory usage. However, I noticed that comparing ASTC 8x8 to ASTC 12x12, there is no actual difference in allocated memory for most of our textures, despite ASTC 12x12 having less than half the bits per pixel of 8x8. The difference between the two only becomes apparent for textures 1024x1024 and larger, and even in that case the actual texture data is sometimes only 60% of the allocation size. I understand there must be some alignment and padding going on, but this seems extreme. For an example scene in my app with ASTC 12x12 for most textures, there is over a 100 MB difference between the ASTC size on disk and when loaded, so I would love to be able to recover even a portion of that memory.
Here is some test code with some measurements I've taken using an iPhone 11:
for(int i = 0; i < 11; i++) {
    MTLTextureDescriptor *texDesc = [[MTLTextureDescriptor alloc] init];
    texDesc.pixelFormat = MTLPixelFormatASTC_12x12_LDR;
    int dim = 12;
    int n = 2 << i;
    int mips = i+1;
    texDesc.width = n;
    texDesc.height = n;
    texDesc.mipmapLevelCount = mips;
    texDesc.resourceOptions = MTLResourceStorageModeShared;
    texDesc.usage = MTLTextureUsageShaderRead;

    // Calculate the equivalent astc texture size
    int blocks = 0;
    if(mips == 1) {
        blocks = n/dim + (n%dim>0? 1 : 0);
        blocks *= blocks;
    } else {
        for(int j = 0; j < mips; j++) {
            int a = 2 << j;
            int cur = a/dim + (a%dim>0? 1 : 0);
            blocks += cur*cur;
        }
    }

    auto tex = [objCObj newTextureWithDescriptor:texDesc];
    printf("%dx%d, mips %d, Astc: %d, Metal: %d\n", n, n, mips, blocks*16, (int)tex.allocatedSize);
}
MTLPixelFormatASTC_12x12_LDR
128x128, mips 7, Astc: 2768, Metal: 6016
256x256, mips 8, Astc: 10512, Metal: 32768
512x512, mips 9, Astc: 40096, Metal: 98304
1024x1024, mips 10, Astc: 158432, Metal: 262144
128x128, mips 1, Astc: 1936, Metal: 4096
256x256, mips 1, Astc: 7744, Metal: 16384
512x512, mips 1, Astc: 29584, Metal: 65536
1024x1024, mips 1, Astc: 118336, Metal: 147456
MTLPixelFormatASTC_8x8_LDR
128x128, mips 7, Astc: 5488, Metal: 6016
256x256, mips 8, Astc: 21872, Metal: 32768
512x512, mips 9, Astc: 87408, Metal: 98304
1024x1024, mips 10, Astc: 349552, Metal: 360448
128x128, mips 1, Astc: 4096, Metal: 4096
256x256, mips 1, Astc: 16384, Metal: 16384
512x512, mips 1, Astc: 65536, Metal: 65536
1024x1024, mips 1, Astc: 262144, Metal: 262144
I also tried using MTLHeaps (placement and automatic) hoping they might be better, but saw nearly the same numbers.
Is there any way to have metal allocate these textures in a more compact way to save on memory?
I have a UIView that displays lines, and I zoom in (scaling by 2 via the zoomScale variable of the scroll view containing the UIView).
As I zoom in, on the Mac version (Designed for iPad) I lose the graphic after a certain number of zooms (the scroll view's maximumZoomScale is set to 10).
To ensure that lines are correctly represented, I modify the contentScaleFactor variable on the UIView; otherwise, the lines are displayed pixelated.
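For reference, a sketch of that approach (the view lookup and the scale formula are illustrative, not taken from the actual project):
import UIKit

// In the UIScrollViewDelegate: bump the drawing view's scale as the user zooms
// so the layer re-rasterizes at the new resolution instead of stretching pixels.
func scrollViewDidZoom(_ scrollView: UIScrollView) {
    guard let drawingView = scrollView.subviews.first else { return }
    drawingView.contentScaleFactor = scrollView.traitCollection.displayScale * scrollView.zoomScale
    drawingView.setNeedsDisplay()
}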
On the iPad (simulator and real device) I do not lose the graphic when zooming.
So the Mac port of the UIView drawing does not behave like the iPad version.
Everything else in the application works fine except for this important detail.
I already submitted a feedback request (#FB16829106) with the images showing the problem. I need a solution to this problem.
Thanks.
Anyone else unable to download the "Rendering a Scene with Deferred Lighting in C++" (https://developer.apple.com/documentation/metal/rendering-a-scene-with-deferred-lighting-in-c++?language=objc)?
I just get an error page:
Is there another place to download this sample?
Hello!
I have a question about how threadgroups work with tile shading. When running "traditional" compute, I get to choose both the threadgroup size and the grid size. However, when using a tile shading kernel I only have the dispatchThreadsPerTile method, which controls how many threads will be run in each tile. So far so good, but what about threadgroups?
The examples in the video "Tile Shading on A11" seem to suggest that there will be only one threadgroup per tile. In the video, [[thread_index_in_threadgroup]] is called "local_id" and it is used to access the imageblock.
I assume this is the default configuration. So when one does the following (sketched in code after these two steps):
Creates MTLRenderPassDescriptor with tileWidth set to W and tileHeight set to H
Fires up the tile shading kernel using dispatchThreadsPerTile with MTLSize size = { W, H, 1 }
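Concretely, a sketch of those two steps (the command buffer, the attachments, and the tile pipeline state built from an MTLTileRenderPipelineDescriptor are assumed to exist):
let W = 16, H = 16
let pass = MTLRenderPassDescriptor()
pass.tileWidth = W
pass.tileHeight = H
// ... configure color attachments here ...

let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass)!
encoder.setRenderPipelineState(tilePipelineState)   // tile kernel pipeline
encoder.dispatchThreadsPerTile(MTLSize(width: W, height: H, depth: 1))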
I understand that the result is a 1-to-1 mapping between the tile "pixels" and kernel threads. Now, what I would like to do is to have more than one threadgroup there. I want this for performance reasons: I have a certain compute kernel which I know executes very well with a small threadgroup size. In fact, { 32, 1, 1 } seems to be the fastest. My understanding is that even if I set the tile size to 16x16, and so am executing 256 threads there, there will only be one SIMD group active in a threadgroup, meaning that this SIMD group has to execute 8 times over the tile.
Is it possible somehow? Or perhaps the limitations of the API point at limitations of the hardware itself, and if I want to execute with SIMD-group-sized threadgroups I have to use a "traditional" compute encoder?
I will be grateful for help.
Michał
Create the QRCode
CIFilter<CIQRCodeGenerator> *f = CIFilter.QRCodeGeneratorFilter;
f.message = [@"Message" dataUsingEncoding:NSASCIIStringEncoding];
f.correctionLevel = @"Q"; // increase level
CIImage *qrcode = f.outputImage;
Overlay the icon
CIImage *icon = [CIImage imageWithURL:url];
CGAffineTransform t = CGAffineTransformMakeTranslation(
(qrcode.extent.width-icon.extent.width)/2.0,
(qrcode.extent.height-icon.extent.height)/2.0);
icon = [icon imageByApplyingTransform:t];
qrcode = [icon imageByCompositingOver:qrcode];
Round off the corners
static dispatch_once_t onceToken;
static CIWarpKernel *k;
dispatch_once(&onceToken, ^ {
k = [CIWarpKernel kernelWithFunctionName:@"bend_corners"
fromMetalLibraryData:metalLibData()
error:nil];
});
qrcode = [k applyWithExtent:qrcode.extent
roiCallback:^CGRect(int i, CGRect r) {
return CGRectInset(r, -radius, -radius); }
inputImage:qrcode
arguments:@[[CIVector vectorWithCGRect:qrcode.extent], @(radius)]];
…and this code for the kernel should go in a separate .ci.metal source file:
float2 bend_corners (float4 extent, float s, destination dest)
{
    float2 p, dc = dest.coord();
    float ratio = 1.0;

    // Round lower left corner
    p = float2(extent.x+s, extent.y+s);
    if (dc.x < p.x && dc.y < p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x,d.y)/max(d.x,d.y);
        ratio = sqrt(1.0 + ratio*ratio);
        return (dc - p)*ratio + p;
    }

    // Round lower right corner
    p = float2(extent.x+extent.z-s, extent.y+s);
    if (dc.x > p.x && dc.y < p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x,d.y)/max(d.x,d.y);
        ratio = sqrt(1.0 + ratio*ratio);
        return (dc - p)*ratio + p;
    }

    // Round upper left corner
    p = float2(extent.x+s, extent.y+extent.w-s);
    if (dc.x < p.x && dc.y > p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x,d.y)/max(d.x,d.y);
        ratio = sqrt(1.0 + ratio*ratio);
        return (dc - p)*ratio + p;
    }

    // Round upper right corner
    p = float2(extent.x+extent.z-s, extent.y+extent.w-s);
    if (dc.x > p.x && dc.y > p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x,d.y)/max(d.x,d.y);
        ratio = sqrt(1.0 + ratio*ratio);
        return (dc - p)*ratio + p;
    }

    return dc;
}
I have this drawing app that I have been working on for the past few years when I have free time. I recently rebuilt the app in Metal to build out other brushes and improve performance; I need to render tens of thousands of lines in real time.
I'm running into an issue trying to create a uniform opacity per path. I have a solution but do not love it, as this is a realtime app and the solution could have some bottlenecks. If I just generate a triangle strip from touch points and do my best to smooth, resample, and handle miters, I will always get some overlaps. See:
To create a uniform opacity I render to an offscreen texture with blending disabled. I then pre-multiply the color and draw that texture to a composite texture with blending on (I do this per path). This works, but gets tricky when you introduce a textured brush: the edges of the texture in the fragment shader cut out the line.
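For reference, the per-path compositing pass described above typically uses a premultiplied-alpha blend state along these lines (a sketch; the pixel format and the rest of the descriptor setup are assumptions):
import Metal

let desc = MTLRenderPipelineDescriptor()
desc.colorAttachments[0].pixelFormat = .bgra8Unorm
desc.colorAttachments[0].isBlendingEnabled = true
desc.colorAttachments[0].rgbBlendOperation = .add
desc.colorAttachments[0].alphaBlendOperation = .add
desc.colorAttachments[0].sourceRGBBlendFactor = .one                     // source color is premultiplied
desc.colorAttachments[0].sourceAlphaBlendFactor = .one
desc.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
desc.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha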
Solution: I discard below a threshold
fragment float4 fragment_line(VertexOut in [[stage_in]],
                              texture2d<float> texture [[ texture(0) ]]) {
    constexpr sampler s(coord::normalized, address::mirrored_repeat, filter::linear);
    float2 texCoord = in.texCoord;
    float4 texColor = texture.sample(s, texCoord);
    if (texColor.a < 0.01) discard_fragment(); // may be slow (from what I read)
    return in.color * texColor;
}
Better but still not perfect.
Question: I'm looking for better ways to create a uniform opacity per path. I tried .max blending, but that causes no blending between paths. Any tips or ideas are much appreciated.
I'm trying to use MTLBinaryArchive. I collected a BinaryArchive from one device and used metal-tt to translate it for all supported iPhone devices, ranging from iPhone 7 Plus to iPhone 16.
However, this BinaryArchive is quite large, around 1.5GB uncompressed, and about 500MB compressed in the IPA. I'm wondering how to address the size issue.
I watched the WWDC 2022 video, which mentioned that the operating system or app installation process would handle compatibility. Does this compatibility support different GPU chips? I tried installing an IPA with a BinaryArchive collected only from an iPhone 12 on an iPhone 13, but the BinaryArchive didn't take effect.
I also saw that Apple supports App Thinning. However, it seems that resources in the Asset Catalog cannot be accessed via URL, and creating an MTLBinaryArchive requires a URL. Is it possible for MTLBinaryArchive to be distributed through App Thinning?
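For reference, a minimal sketch of how an archive is opened today; the resource name is hypothetical, and the requirement of a file URL is why Asset Catalog storage is awkward:
import Foundation
import Metal

// Sketch only: opens a binary archive shipped as a plain bundle resource.
func loadArchive(device: MTLDevice) throws -> MTLBinaryArchive {
    let descriptor = MTLBinaryArchiveDescriptor()
    descriptor.url = Bundle.main.url(forResource: "Shaders", withExtension: "metallib")
    return try device.makeBinaryArchive(descriptor: descriptor)
}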
The WWDC 2022 video also mentioned using the -Os optimization flag to reduce size. Can this give an estimate of how much compression it would achieve? Are there any methods to solve the BinaryArchive size issue without impacting performance?
We are trying to implement saving and fetching data to and from iCloud, but it has some problems.
macOS: 15.3
Here is what I do:
Enable the Game Center and iCloud capabilities in Signing & Capabilities, pick iCloud Documents, create and select a Container.
Sample code:
void SaveDataToCloud( const void* buffer, unsigned int datasize, const char* name )
{
    if (!GKLocalPlayer.localPlayer.authenticated) return;

    NSData* data = [NSData dataWithBytes:buffer length:datasize];
    NSString* filename = [NSString stringWithUTF8String:name];
    [[GKLocalPlayer localPlayer] saveGameData:data withName:filename completionHandler:^(GKSavedGame * _Nullable savedGame, NSError * _Nullable error) {
        if (error != nil)
        {
            NSLog( @"SaveDataToCloud error:%@", [error localizedDescription] );
        }
    }];
}
void FetchCloudSavedGameData()
{
    if ( !GKLocalPlayer.localPlayer.authenticated ) return;

    [[GKLocalPlayer localPlayer] fetchSavedGamesWithCompletionHandler:^(NSArray<GKSavedGame *> * _Nullable savedGames, NSError * _Nullable error) {
        if ( error == nil )
        {
            for ( GKSavedGame *item in savedGames )
            {
                [item loadDataWithCompletionHandler:^(NSData * _Nullable data, NSError * _Nullable error) {
                    if ( error == nil )
                    {
                        // handle data
                    }
                    else
                    {
                        NSLog( @"FetchCloudSavedGameData failed to load iCloud file: %@, error:%@", item.name, [error localizedDescription] );
                    }
                }];
            }
        }
        else
        {
            NSLog( @"FetchCloudSavedGameData error:%@", [error localizedDescription] );
        }
    }];
}
Neither saveGameData nor fetchSavedGamesWithCompletionHandler reports any error. When debugging, the saveGameData completion handler gets a nil error and a valid savedGame, but when we relaunch the game and use fetchSavedGamesWithCompletionHandler to fetch the data, we get nothing: no error is reported, and savedGames has a length of 0.
From this page https://developer.apple.com/forums/thread/718541?answerId=825596022#825596022
we tried waiting 30 seconds after authentication, then calling fetchSavedGamesWithCompletionHandler, and still got the same empty result.
Checked:
Game Center and iCloud are enabled and signed in with the same account.
iCloud has enough space to save.
So what is going wrong?
In this video, tile fragment shading is recommended for image processing. In this example, the unpack function takes two arguments, one of which is RasterizerData. As I understand it, this is the data passed to us from the previous stage (Vertex) of the graphics pipeline.
However, the properties of MTLTileRenderPipelineDescriptor do not include an option for specifying a Vertex function. Therefore, in this render pass, a mix of commands is used: first, a draw command is executed to obtain UV coordinates, and then threads are dispatched.
My question is: without using a draw command, only dispatch, how can I get pixel coordinates in the fragment tile function? For the kernel tile function, everything is clear.
typedef struct
{
    float4 OPTexture       [[ color(0) ]];
    float4 IntermediateTex [[ color(1) ]];
} FragmentIO;

fragment FragmentIO Unpack(RasterizerData in [[ stage_in ]],
                           texture2d<float, access::sample> srcImageTexture [[texture(0)]])
{
    FragmentIO out;

    //...
    // Run necessary per-pixel operations
    out.OPTexture = // assign computed value;
    out.IntermediateTex = // assign computed value;

    return out;
}
In my project, I have several nodes (SCNNode) with some levels of detail (SCNLevelOfDetail) and everything works correctly, but when I add animation using morphing (SCNMorpher), the animation works correctly but without the levels of detail. Note: the entire scene is created in Autodesk 3D Studio Max and then exported in (.ASE) format.
The goal is to make animations using morphing that have some levels of detail.
Does anyone know if SCNMorpher supports geometry with some levels of detail?
I appreciate any information about this case.
Thanks everyone!!!
Part of the code I use to load geometries (SCNGeometry) with some levels of detail (SCNLevelOfDetail) using morphing (SCNMorpher).
node.morpher = [SCNMorpher new];
SCNGeometry *geometry = [self geometryWithMesh:mesh];
NSMutableArray <SCNLevelOfDetail*> *mutLevelOfDetail = [NSMutableArray arrayWithCapacity:self.mutLevelsOfDetail.count];
for (int i = 0; i < self.mutLevelsOfDetail.count; i++) {
    ASCGeomObject *geomObject = self.mutLevelsOfDetail[i];
    SCNGeometry *geometry = [self geometryWithMesh:geomObject.mesh.mutMeshAnimation[i]];
    [mutLevelOfDetail addObject:[SCNLevelOfDetail levelOfDetailWithGeometry:geometry worldSpaceDistance:geomObject.worldSpaceDistance]];
}
geometry.levelsOfDetail = mutLevelOfDetail;
node.morpher.targets = [node.morpher.targets arrayByAddingObject:geometry];
As in the Environments, where we have real-time reflections of movies on a screen, or reflections of the surrounding Mount Hood scenery in the background...
Could I get a metallic surface showing accurate reflections of a box placed on top of it?
I don't mean using a probe or an HDR cubemap; I mean the same accurate reflections as the Mount Hood water showing the movie I'm watching in another app.