Hi everyone,
I'm using the Vision framework’s ImageAestheticsScoresObservation class (https://developer.apple.com/documentation/vision/imageaestheticsscoresobservation).
I noticed that the returned overallScore sometimes has negative values. Could someone confirm whether the expected range of the score is from -1.0 to 1.0?
The documentation doesn’t explicitly state the possible score range, so I’d appreciate any clarification or insights.
Thanks in advance!
Hi
Looking at the documentation for screenSpaceAmbientOcclusionIntensity, I noticed that it says this is supported on visionOS 1.0+: https://developer.apple.com/documentation/scenekit/scncamera/screenspaceambientocclusionintensity
Could someone enlighten me as to how that would work? As far as I know, we don't use an SCNCamera on visionOS. So, what's the idea here? Can we activate SSAO on visionOS?
Starting with iOS 18.0 beta 1, I've noticed that RealityKit frequently crashes in the simulator when an app launches and presents an ARView.
I was able to create a small sample app with repro steps that demonstrates the issue, and I've submitted feedback: FB16144085
I've included a crash log with the feedback.
If possible, I'd appreciate it if an Apple engineer could investigate and suggest a workaround. It's awkward to be restricted to the iOS 17 simulator, which does not exhibit this behavior.
Please let me know if there's anything I can do to help.
Thank you.
Hi, I'm trying to use metal-cpp, but I'm getting compile errors:
ISO C++ requires the name after '::' to be found in the same scope as the name before '::'
metal-cpp/Foundation/NSSharedPtr.hpp(162):
template <class _Class>
_NS_INLINE NS::SharedPtr<_Class>::~SharedPtr()
{
if (m_pObject)
{
m_pObject->release();
}
}
Use of old-style cast
metal-cpp/Foundation/NSObject.hpp(149):
template <class _Dst>
_NS_INLINE _Dst NS::Object::bridgingCast(const void* pObj)
{
#ifdef __OBJC__
return (__bridge _Dst)pObj;
#else
return (_Dst)pObj;
#endif // __OBJC__
}
The Xcode project was generated using CMake:
target_compile_features(${MODULE_NAME} PRIVATE cxx_std_20)
target_compile_options(${MODULE_NAME}
    PRIVATE
        "-Wgnu-anonymous-struct"
        "-Wold-style-cast"
        "-Wdtor-name"
        "-Wpedantic"
        "-Wno-gnu"
)
Maybe I need to set some CMake flags for the C++ compiler?
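One thing I'm considering trying (a sketch, assuming metal-cpp is vendored in a metal-cpp folder next to the CMakeLists.txt; the path is my own guess): mark the metal-cpp headers as SYSTEM includes so that -Wold-style-cast, -Wdtor-name, and -Wpedantic are applied only to my own sources and not to the third-party headers.

# Sketch: treat metal-cpp headers as system headers so the strict warnings
# above are not raised for library code. The folder path is an assumption.
target_include_directories(${MODULE_NAME} SYSTEM PRIVATE
    "${CMAKE_CURRENT_SOURCE_DIR}/metal-cpp"
)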
I work on a team that provides an SDK for another game to handle various tasks like authentication. They are experiencing a case where devices using iOS 17 are failing to authenticate with GameCenter, receiving the message "The requested operation could not be completed because local player has not been authenticated." We imagine this is because they still have some setup to finish regarding GameCenter itself, and we're working with them to take care of that.
However, on iOS 18, their app ends up waiting indefinitely for GameCenter authentication messages that it never receives. That's where we're puzzled. We expect them to have the same outcome regardless of OS version.
We initiate GameCenter authentication by setting an authenticateHandler after some initial application setup. The handler has code to account for UI, errors, and successful authentication.
On iOS 17, it's clear that the handler is getting called as expected, because they receive an indication that the player isn't authenticated. But on iOS 18, it looks like the same handler code isn't being called at all. Are there differences in how iOS 18 interacts with the authenticateHandler that we somehow aren't accounting for? Or is there potentially something else that we're doing incorrectly that manifests only on iOS 18?
Here's a simplified version of our login function code (in Obj-C++). There is no OS-specific code, and the job that owns this function does stay in scope until after authentication is complete.
void beginLogin()
{
    // Snip: Check if the user is already logged in.
    // Snip: Prevent multiple concurrent calls to this function.

    auto authenticateHandler = ^(UIViewController* gcViewController, NSError* error)
    {
        if (gcViewController != nil)
        {
            // Snip: Display the UI
        }
        else if (error != nil)
        {
            // Snip: Handle the error.
        }
        else
        {
            if ([[GKLocalPlayer localPlayer] isAuthenticated])
            {
                // Snip: Handle successful authentication.
            }
            else
            {
                // Snip: Handle other case.
            }
        }
    };

    [[GKLocalPlayer localPlayer] setAuthenticateHandler: authenticateHandler];
}
Hello,
We are experimenting with Metal to accelerate some peculiar numerical computation. Our workloads are relatively small, so the ability to avoid moving data to and from the GPU's memory is very appealing. However, we are observing higher overhead compared to CUDA, which negates the benefits of avoiding data transfer.
In our tests using an empty kernel, CUDA completes in 0.001 ms (Intel i7 10700K, RTX 3080), while Metal's waitUntilCompleted takes 0.12 ms (M2 Max). As we do not have prior experience with Metal, we are wondering whether we are using the APIs correctly and this timing is expected, or whether there is a way to reduce it.
Thank you in advance for any comment!
test-metal.cpp
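For reference, the measurement is essentially the following (a simplified sketch in Swift rather than our actual C++ test; the device, queue, and pipeline setup are placeholders). It compares the wall-clock time around waitUntilCompleted with the command buffer's GPU timestamps, to separate kernel execution time from submission and wait overhead.

import Metal
import QuartzCore

// Sketch: time an empty compute kernel. "pipeline" is assumed to be built
// from an empty kernel function; setup is omitted for brevity.
func timeEmptyKernel(queue: MTLCommandQueue, pipeline: MTLComputePipelineState) {
    let commandBuffer = queue.makeCommandBuffer()!
    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(pipeline)
    encoder.dispatchThreadgroups(MTLSize(width: 1, height: 1, depth: 1),
                                 threadsPerThreadgroup: MTLSize(width: 1, height: 1, depth: 1))
    encoder.endEncoding()

    let cpuStart = CACurrentMediaTime()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
    let cpuEnd = CACurrentMediaTime()

    // GPU-side timestamps exclude the CPU submission and wake-up cost.
    let gpuTime = commandBuffer.gpuEndTime - commandBuffer.gpuStartTime
    print("CPU wait: \((cpuEnd - cpuStart) * 1000) ms, GPU execution: \(gpuTime * 1000) ms")
}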
In my SceneKit game I'm able to connect two players with GKMatchmakerViewController. Now I want to support the scenario where one of them disconnects and wants to reconnect. I tried to do this with this code:
nonisolated public func match(_ match: GKMatch, player: GKPlayer, didChange state: GKPlayerConnectionState) {
    Task { @MainActor in
        switch state {
        case .connected:
            break
        case .disconnected, .unknown:
            let matchRequest = GKMatchRequest()
            matchRequest.recipients = [player]
            do {
                try await GKMatchmaker.shared().addPlayers(to: match, matchRequest: matchRequest)
            } catch {
            }
        @unknown default:
            break
        }
    }
}

nonisolated public func player(_ player: GKPlayer, didAccept invite: GKInvite) {
    guard let viewController = GKMatchmakerViewController(invite: invite) else {
        return
    }
    viewController.matchmakerDelegate = self
    present(viewController)
}
But after presenting the view controller created with GKMatchmakerViewController(invite:), nothing else happens. I would expect matchmakerViewController(_:didFind:) to be called; otherwise, how would I get an instance of GKMatch?
Here is the code I use to reproduce the issue, followed by the reproduction steps.
Code
Run the attached project on an iPad and a Mac simultaneously.
On both devices, tap the ship to connect to GameCenter.
Create an automatched match by tapping the rightmost icon on both devices.
When the two devices are matched, on iPad close the dialog and tap on the ship to disconnect from GameCenter.
Wait some time until the Mac detects the disconnect and automatically sends an invitation to join again.
When the notification arrives on the iPad, tap it, then tap the ship to connect to GameCenter again. The iPad receives the call player(_:didAccept:), but nothing else, so there’s no way to get a GKMatch instance again.
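One alternative I'm wondering about (I'm not sure it's the intended approach): instead of presenting a second GKMatchmakerViewController, ask the matchmaker to turn the invite into a match directly with GKMatchmaker's match(for:). A minimal sketch of what I mean, as a replacement for my player(_:didAccept:) above:

nonisolated public func player(_ player: GKPlayer, didAccept invite: GKInvite) {
    Task { @MainActor in
        do {
            // Request the GKMatch directly from the invite, without going
            // through GKMatchmakerViewController again.
            let match = try await GKMatchmaker.shared().match(for: invite)
            match.delegate = self
            // Snip: resume the game with the reconnected player.
        } catch {
            // Snip: handle the error.
        }
    }
}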
Hello,
Could someone post code that shows how to implement GCVirtualController to move a box around the screen?
I've been poking around with GCVirtualController and have gotten as far as having the D-pad and the A and B buttons appear on the display. But how do I make it do anything?
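For reference, here is a minimal sketch of what I think the missing piece looks like (the scene layout, node, and speed values are my own placeholders, and I'm not sure this is the recommended pattern): connect the virtual controller, read its D-pad through the extendedGamepad profile when the controller connects, and apply the value to the box in the scene's update loop.

import SpriteKit
import GameController

class GameScene: SKScene {
    private let box = SKSpriteNode(color: .red, size: CGSize(width: 40, height: 40))
    private var virtualController: GCVirtualController?
    private var velocity = CGVector.zero

    override func didMove(to view: SKView) {
        box.position = CGPoint(x: frame.midX, y: frame.midY)
        addChild(box)

        // When the virtual controller connects, it shows up as a regular GCController.
        NotificationCenter.default.addObserver(forName: .GCControllerDidConnect,
                                               object: nil, queue: .main) { [weak self] note in
            guard let gamepad = (note.object as? GCController)?.extendedGamepad else { return }
            // Store a velocity whenever the D-pad value changes.
            gamepad.dpad.valueChangedHandler = { _, x, y in
                self?.velocity = CGVector(dx: CGFloat(x) * 200, dy: CGFloat(y) * 200)
            }
        }

        // Create the on-screen controller with a D-pad and A/B buttons.
        let config = GCVirtualController.Configuration()
        config.elements = [GCInputDirectionPad, GCInputButtonA, GCInputButtonB]
        let controller = GCVirtualController(configuration: config)
        controller.connect()
        virtualController = controller
    }

    override func update(_ currentTime: TimeInterval) {
        // Apply the velocity each frame (roughly 1/60 s per frame).
        box.position.x += velocity.dx / 60
        box.position.y += velocity.dy / 60
    }
}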
Topic: Graphics & Games / SubTopic: SpriteKit
Hello
I am trying to get threadgroup memory access in a fragment shader. In essence, I would like all the fragments in a tile to bitwise-OR some value. My idea was to use simd_or across the SIMD group, then have thread 0 of each SIMD group atomically OR the value into threadgroup memory. Finally, the very first thread of the tile would be tasked with writing the value down to a texture with write access.
Now, I can allocate the threadgroup memory argument to the fragment function all right. MTLRenderCommandEncoder has a setThreadgroupMemoryLength call, which I am using in the following way:
[renderEncoder setThreadgroupMemoryLength:16 offset:0 atIndex:0];
Unfortunately, all I am getting is the following error (runtime assertion):
-[MTLDebugRenderCommandEncoder setThreadgroupMemoryLength:offset:atIndex:]:3487: failed assertion `Set Threadgroup Memory Length Validation
offset + length(16) must be <= threadgroupMemoryLength(0).'
What am I doing wrong? How can I get threadgroup memory in the fragment shader? I know I could use tile shading and a compute function, but the problem is that I would really like to use the fragment path here. I will be grateful for help.
I am trying to create empty metadata and set the HDRGainMapHeadroom to xxx. However, the final returned mutableMetadata doesn't contain HDRGainMap:HDRGainMapVersion or HDRGainMap:HDRGainMapHeadroom, but iio:hasXMP does exist.
Why? Is the reason that the HDRGainMap namespace is private?
func createHDRGainMapMetadata(version: Int, headroom: Double) -> CGImageMetadata? {
    // Create a mutable metadata object
    let mutableMetadata = CGImageMetadataCreateMutable()

    // Define the namespace prefix for HDRGainMap
    let namespace = "HDRGainMap"

    // Set the iio:hasXMP value
    let xmpKeyPath = "iio:hasXMP"
    let xmpValue = String(true)
    let xmpSetResult = CGImageMetadataSetValueWithPath(mutableMetadata, nil, xmpKeyPath as CFString, xmpValue as CFString)
    if xmpSetResult == false {
        print("Failed to set xmp")
    }

    // Set the HDRGainMapVersion item
    let versionKeyPath = "\(namespace):HDRGainMapVersion"
    let versionValue = String(version)
    let versionSetResult = CGImageMetadataSetValueWithPath(mutableMetadata, nil, versionKeyPath as CFString, versionValue as CFString)
    if versionSetResult == false {
        print("Failed to set HDRGainMapVersion")
    }

    // Set the HDRGainMapHeadroom item
    let headroomKeyPath = "\(namespace):HDRGainMapHeadroom"
    let headroomValue = String(headroom)
    let headroomSetResult = CGImageMetadataSetValueWithPath(mutableMetadata, nil, headroomKeyPath as CFString, headroomValue as CFString)
    if headroomSetResult == false {
        print("Failed to set HDRGainMapHeadroom")
    }

    return mutableMetadata
}
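One thing I suspect (not confirmed) is that the HDRGainMap prefix has to be registered with a namespace URL before CGImageMetadataSetValueWithPath will accept prefixed key paths. A sketch of what I mean; the namespace URL below is my own guess, not something I found documented:

import ImageIO

func createMetadataWithRegisteredNamespace(version: Int, headroom: Double) -> CGImageMetadata? {
    let mutableMetadata = CGImageMetadataCreateMutable()

    // Register the "HDRGainMap" prefix with a namespace URL before using it in key paths.
    // NOTE: this URL is an assumption; the namespace Apple actually uses may differ or be private.
    let namespaceURL = "http://ns.apple.com/HDRGainMap/1.0/" as CFString
    let prefix = "HDRGainMap" as CFString
    var error: Unmanaged<CFError>?
    guard CGImageMetadataRegisterNamespaceForPrefix(mutableMetadata, namespaceURL, prefix, &error) else {
        print("Failed to register namespace: \(String(describing: error))")
        return nil
    }

    // With the prefix registered, the prefixed paths should resolve.
    CGImageMetadataSetValueWithPath(mutableMetadata, nil, "HDRGainMap:HDRGainMapVersion" as CFString, String(version) as CFString)
    CGImageMetadataSetValueWithPath(mutableMetadata, nil, "HDRGainMap:HDRGainMapHeadroom" as CFString, String(headroom) as CFString)
    return mutableMetadata
}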
Hi,
I'm facing a problem where I cannot scan a specific Code 39 barcode with the Vision framework. We have multiple barcodes on a label, and almost all of the Code 39 codes can be scanned, but not this specific one.
One more piece of information: the barcode that is not recognized by Vision can be read by a general barcode scanner.
Has anyone faced a similar situation?
Are there particular conditions (size, intensity, etc.) that make a barcode hard to scan when using Vision?
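In case our configuration is the issue, this is roughly how we run the request (a simplified sketch; the image source and error handling are placeholders):

import Vision

// Sketch: detect Code 39 barcodes in a label photo. cgImage is a placeholder input.
func scanCode39(in cgImage: CGImage) {
    let request = VNDetectBarcodesRequest()
    // Restrict detection to Code 39 variants.
    request.symbologies = [.code39, .code39Checksum, .code39FullASCII]

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])  // simplified error handling

    for barcode in request.results ?? [] {
        print(barcode.symbology.rawValue, barcode.payloadStringValue ?? "<no payload>")
    }
}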
Regards,
I want to use SwiftUI views as RealityKit entities to display AR labels within a RealityKit scene, and the labels could be more complicated than just text in a window, as they might include images, dynamic text, animations, WebViews, etc. visionOS enables this through RealityView attachments, and there is RealityView support on iOS 18.
I tried running RealityView attachments code samples from visionOS on iOS 18. However, the code below gives errors on iOS 18:
import SwiftUI
import RealityKit

struct PassportRealityView: View {
    let qrCodeCenter: SIMD3<Float>
    let assetID: String

    var body: some View {
        RealityView { content, attachments in
            // Setup your AR content, such as markers or 3D models
            if let qrAnchor = try? await Entity(named: "QRAnchor") {
                qrAnchor.position = qrCodeCenter
                content.add(qrAnchor)
            }
        } attachments: {
            Attachment(id: "passportTextAttachment") {
                Text(assetID)
                    .font(.title3)
                    .foregroundColor(.white)
                    .background(Color.black.opacity(0.7))
                    .padding(5)
                    .cornerRadius(5)
            }
        }
        .frame(width: 300, height: 400)
    }
}
When I remove the attachments keyword and its block, the errors mostly go away. That does not help me, as I want to attach SwiftUI views to anchor entities in RealityKit.
As I understand it, RealityView attachments are not supported on iOS 18. I wonder if there is any way of showing SwiftUI views as entities on iOS 18 at this point, or am I forced to use text meshes and 3D planes to build the UI? I checked out the RealityUI plugin, but it's too simple for my use case of building complex AR labels. Any advice would be appreciated. Thanks!
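In the meantime, one workaround I've been experimenting with (a rough sketch, and I'm not sure it's the intended approach; the label view and sizes are placeholders): render the SwiftUI label to an image with ImageRenderer on the main actor, turn it into a TextureResource, and put it on an unlit plane attached to the anchor entity. This obviously loses interactivity and animation, so it only covers the static-label part of my use case.

import SwiftUI
import RealityKit

// Sketch: bake a SwiftUI view into a texture and show it on a plane entity.
@MainActor
func makeLabelEntity(assetID: String) throws -> ModelEntity {
    let renderer = ImageRenderer(content:
        Text(assetID)
            .font(.title3)
            .foregroundColor(.white)
            .padding(5)
            .background(Color.black.opacity(0.7))
    )
    renderer.scale = 3  // render at higher resolution so the text stays sharp

    guard let cgImage = renderer.cgImage else {
        throw NSError(domain: "LabelEntity", code: 1)
    }

    let texture = try TextureResource.generate(from: cgImage, options: .init(semantic: .color))
    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))

    // Size the plane to match the rendered image's aspect ratio (0.1 m tall, placeholder).
    let aspect = Float(cgImage.width) / Float(cgImage.height)
    return ModelEntity(mesh: .generatePlane(width: 0.1 * aspect, height: 0.1),
                       materials: [material])
}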
Hi Apple,
In visionOS, for real-time streaming of large 3D scenes, I plan to create Metal buffers and textures on multiple threads and then use a compute shader on the main thread to copy the Metal resources into RealityKit, minimizing main-thread usage. Given that most of RealityKit's default APIs require execution on the main actor (main thread), they are not ideal for streaming data. Is this approach the best way to handle streaming data and real-time rendering?
Thank you very much.
Hello!
I have a question about how thread groups work with tile shading. When running "traditional" compute, I get to choose both the thread group size and the grid size. However, when using a tile shading kernel I only have the dispatchThreadsPerTile method; this controls how many threads will be run in each tile. So far so good, but what about thread groups?
The examples in video "Tile Shading on A11" seem to suggest that there will be only one thread group per tile. In the video, [[thread_index_in_threadgroup]] is called "local_id" and it is used to access the image block.
I assume this is the default configuration. So when one does the following:
Creates MTLRenderPassDescriptor with tileWidth set to W and tileHeight set to H
Fires up the tile shading kernel using dispatchThreadsPerTile with MTLSize size = { W, H, 1 }
I understand that the result is a 1-to-1 mapping between the tile "pixels" and kernel threads. Now, what I would like to do is have more than one thread group there. I want this for performance reasons: I have a certain compute kernel which I know executes very well with a small thread group size. In fact, { 32, 1, 1 } seems to be the fastest. My understanding is that even if I set the tile size to 16x16, and so am executing 256 threads there, there will only be one SIMD group active in the thread group, meaning that this SIMD group has to execute 8 times over the tile.
Is this possible somehow? Or perhaps the limitations of the API point at the limitations of the hardware itself, and if I want to execute with SIMD-group-sized thread groups I have to use a "traditional" compute encoder?
Will be grateful for help.
Michał
Is anyone else unable to download the "Rendering a Scene with Deferred Lighting in C++" sample (https://developer.apple.com/documentation/metal/rendering-a-scene-with-deferred-lighting-in-c++?language=objc)?
I just get an error page:
Is there another place to download this sample?
Topic: Graphics & Games / SubTopic: Metal
We have a pixel buffer pool managed by the system (created using the CVPixelBufferPoolCreate API). Each time we need a pixel buffer, we call CVPixelBufferPoolCreatePixelBuffer to create one from the pool. We then overwrite all pixels of the buffer, get the IOSurface from the buffer, and set that IOSurface as a CALayer's contents property in another process to show it. Everything works fine.
Now we want to optimize by overwriting only the pixels that changed between frames. The way we'd like to do this: after we call CVPixelBufferPoolCreatePixelBuffer to create a buffer, we get the underlying IOSurface ID and map it to frame info. The next time we get the same IOSurface ID, we compare the current frame info with the one we stored and only update the changed pixels in the CVPixelBuffer.
However, there is no documentation mentioning whether a CVPixelBuffer created using CVPixelBufferPoolCreatePixelBuffer will still contain the previous pixels (its content from before it was returned to the pool). Do we have this guarantee? If not, is there any way we can know whether the created buffer contains the previous pixels or not?
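For reference, this is roughly the bookkeeping we have in mind (a simplified sketch; FrameInfo stands in for our own per-frame data, and names are placeholders):

import CoreVideo
import IOSurface

struct FrameInfo { /* placeholder for our per-frame bookkeeping */ }

var frameInfoBySurfaceID: [IOSurfaceID: FrameInfo] = [:]

func pixelBuffer(from pool: CVPixelBufferPool, frameInfo: FrameInfo) -> (CVPixelBuffer, needsFullRedraw: Bool)? {
    var buffer: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &buffer)
    guard let buffer,
          let surface = CVPixelBufferGetIOSurface(buffer)?.takeUnretainedValue() else {
        return nil
    }

    // Key the stored frame info off the IOSurface ID of the buffer we got back.
    let surfaceID = IOSurfaceGetID(surface)
    // If we have frame info for this IOSurface, we previously rendered into it;
    // only the changed pixels would need rewriting, provided its old contents
    // are guaranteed to survive the round trip through the pool (the open question).
    let needsFullRedraw = frameInfoBySurfaceID[surfaceID] == nil
    frameInfoBySurfaceID[surfaceID] = frameInfo
    return (buffer, needsFullRedraw)
}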
I have a UIView that displays lines, and I zoom in (scale by 2 on the zoomScale variable of the scroll view containing the UIView).
As I zoom in, on the Mac version (Designed for iPad) I lose the graphics after a certain number of zooms (the scroll view's maximumZoomScale is set to 10).
To ensure that lines are correctly represented, I modify the contentScaleFactor variable on the UIView; otherwise, the lines' display is pixelated.
On the iPad (simulator and real device) I do not lose the graphics when zooming.
So the Mac port of the UIView drawing does not work like the iPad version.
Everything else in the application works fine except for this important detail.
I already submitted a feedback request (#FB16829106) with the images showing the problem. I need a solution to this problem.
Thanks.
The flushContextInternal function in glr_sync.mm:262 called abort internally. What caused this? Was it due to high device temperature or some other reason?
Date/Time: 2024-08-29 09:20:09.3102 +0800
Launch Time: 2024-08-29 08:53:11.3878 +0800
OS Version: iPhone OS 16.7.10 (20H350)
Release Type: User
Baseband Version: 8.50.04
Report Version: 104
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Triggered by Thread: 0
Thread 0 name:
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x00000001ed053198 __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001fc5e25f8 pthread_kill + 208 (pthread.c:1670)
2 libsystem_c.dylib 0x00000001b869c4b8 abort + 124 (abort.c:118)
3 AppleMetalGLRenderer 0x00000002349f574c GLDContextRec::flushContextInternal() + 700 (glr_sync.mm:262)
4 DiSpecialDriver 0x000000010824b07c Di::RHI::onRenderFrameEnd() + 184 (RHIDevice.cpp:118)
5 DiSpecialDriver 0x00000001081b85f8 Di::Client::drawFrame() + 120 (Client.cpp:155)
2024-08-27_14-44-10.8104_+0800-07d9de9207ce4c73289507e608e5de4320d02ccf.crash
Topic: Graphics & Games / SubTopic: Metal
As in the Environments, where we have real-time reflections of movies on a screen, or reflections of the surrounding Mount Hood in the background...
Could I get a metallic surface showing accurate reflections of a box on top of it?
I don't mean using a probe or an HDR cubemap; I mean the same accurate reflections as in the water of the Mount Hood environment reflecting the movie I'm watching in another app.
Using Swift, how do I resize individual SKSpriteNodes for various iPad device sizes?
Currently, I use theScene.scaleMode = .resizeFill for scaling the entire SKScene and it works as advertised. Please note that .aspectFill does not solve the challenge described below.
My challenge is to resize individual SKSpriteNodes (that are components of the overall SKScene) based on the size of the iOS device; for example, iPad mini <--> iPad Pro.
Right now, I hard-code the sizes of these nodes, but I realize this is not the optimum approach.
Again, hard coding is frowned upon, so I am looking for a more dynamic method that works based on the device size.
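One approach I'm considering (just a sketch; the asset name and the 10% factor are placeholders, and I'm not sure it's best practice): define node sizes as fractions of the scene size and recompute them in didChangeSize(_:), which fires whenever the .resizeFill scene changes size.

import SpriteKit

class GameScene: SKScene {
    private let playerNode = SKSpriteNode(imageNamed: "player") // placeholder asset name

    override func didMove(to view: SKView) {
        addChild(playerNode)
        layoutNodes()
    }

    // Called by SpriteKit when the scene size changes (e.g. different iPad models, rotation).
    override func didChangeSize(_ oldSize: CGSize) {
        super.didChangeSize(oldSize)
        layoutNodes()
    }

    private func layoutNodes() {
        // Size the node as a fraction of the scene, so iPad mini and iPad Pro
        // both get proportionally sized sprites instead of hard-coded points.
        let side = size.width * 0.1
        playerNode.size = CGSize(width: side, height: side)
        playerNode.position = CGPoint(x: frame.midX, y: frame.midY)
    }
}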
Topic: Graphics & Games / SubTopic: SpriteKit