My MacBook Air M1 is running macOS Sonoma 14.3.1, and I tried to install game-porting-toolkit tonight. After the step where it asked me to enter the command "brew -v install apple/apple/game-porting-toolkit", Terminal ran for several minutes, but at the end this error appeared: Error: apple/apple/game-porting-toolkit 1.1 did not build.
I don't know anything about coding and software. Could someone please tell me what caused this error and how to fix it? I would appreciate your help!
So I get JPEG data in my app. Previously I was using the higher level NSBitmapImageRep API and just feeding the JPEG data to it.
But now I've noticed that on Sonoma, if I get a JPEG in the CMYK color space, the NSBitmapImageRep renders mostly black and is corrupted. So I'm trying to drop down to the lower-level APIs. Specifically, I grab a CGImageRef and am trying to use the Accelerate API to convert it to another format (to hopefully work around the issue)...
CGImageRef sourceCGImage = CGImageCreateWithJPEGDataProvider(jpegDataProvider,
                                                             NULL,
                                                             shouldInterpolate,
                                                             kCGRenderingIntentDefault);
Now I use vImageConverter_CreateWithCGImageFormat... with the following values for source and destination formats:
Source format: (derived from sourceCGImage)
bitsPerComponent = 8
bitsPerPixel = 32
colorSpace = (kCGColorSpaceICCBased; kCGColorSpaceModelCMYK; Generic CMYK Profile)
bitmapInfo = kCGBitmapByteOrderDefault
version = 0
decode = 0x000060000147f780
renderingIntent = kCGRenderingIntentDefault
Destination format:
bitsPerComponent = 8
bitsPerPixel = 24
colorSpace = (DeviceRGB)
bitmapInfo = 8197
version = 0
decode = 0x0000000000000000
renderingIntent = kCGRenderingIntentDefault
But vImageConverter_CreateWithCGImageFormat fails with kvImageInvalidImageFormat. Now if I change the destination format to use 32 bitsPerPixel and use alpha in the bitmap info, vImageConverter_CreateWithCGImageFormat does not return an error, but I get a black image just like with NSBitmapImageRep.
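For reference, one conversion path that sidesteps vImage entirely is to let Core Graphics do the CMYK-to-RGB match by drawing into an RGB bitmap context. This is only a sketch (shown in Swift; sourceCGImage stands in for the image created above), and I haven't verified it against the Sonoma CMYK issue:
import CoreGraphics

// Sketch: draw the CMYK-decoded CGImage into an 8-bit RGB context and read the result back.
func convertToRGB(_ sourceCGImage: CGImage) -> CGImage? {
    let width = sourceCGImage.width
    let height = sourceCGImage.height
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue) else {
        return nil
    }
    context.draw(sourceCGImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return context.makeImage()
}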
Hi, I've got a Swift framework with a bunch of Metal files. Currently, users of the Swift package have to manually include a separately provided Metal library in their bundle.
First question: is there a way to make a Metal library target in a Swift package and just include the .metal files (without a binary asset)?
Second question: if not, Swift 5.3 has resource support; how would you recommend bundling a Metal library in a Swift package?
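For what it's worth, one sketch of the resource route from the second question: declare a precompiled default.metallib as a target resource in Package.swift with resources: [.copy("default.metallib")] (Swift tools 5.3+), then load it from the package's resource bundle. Names below are placeholders:
import Foundation
import Metal

// Sketch: load a precompiled metallib shipped as a SwiftPM resource via Bundle.module.
func makePackageLibrary(device: MTLDevice) throws -> MTLLibrary {
    guard let url = Bundle.module.url(forResource: "default", withExtension: "metallib") else {
        throw NSError(domain: "MyMetalPackage", code: 1)   // placeholder error
    }
    return try device.makeLibrary(URL: url)
}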
This started appearing after updating to Xcode 14.3:
[default] CGSWindowShmemCreateWithPort failed on port 0
Hey folks,
I have a legacy game that runs on OpenGL ES, and it no longer works in the simulators for Apple Silicon device types, i.e. the iPhone 15 Pro or the 13" iPads. And yes, I'm also running on Apple Silicon (M1 Max).
The apps work fine on the actual devices, but the simulator crashes on any glDrawElements with a stack that looks like the following:
I have not yet seen an announcement about this no longer working, but I've seen mention of other apps dropping GL support (https://github.com/maplibre/maplibre-native/issues/2351).
Can anyone shed some light? I'm obviously going to try to fix it, or find a recent sample app to start from and see what might be up. Or move to Metal, but I hadn't bargained for that level of effort atm ;)
Any suggestions appreciated!
Sample project from: https://developer.apple.com/documentation/RealityKit/guided-capture-sample was fine with beta 3.
In beta 4, getting these errors:
Generic struct 'ObservedObject' requires that 'ObjectCaptureSession' conform to 'ObservableObject'
Does anyone have a fix?
Thanks
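One possible workaround (just a sketch, and it assumes ObjectCaptureSession was moved to the Observation framework in beta 4 and no longer conforms to ObservableObject): hold the session in @State instead of @StateObject/@ObservedObject, for example:
import SwiftUI
import RealityKit

struct CaptureView: View {
    // @State instead of @ObservedObject, since the session would then be @Observable.
    @State private var session = ObjectCaptureSession()

    var body: some View {
        ObjectCaptureView(session: session)
            .onAppear {
                // imagesDirectory is a placeholder for wherever the sample stores captures.
                let imagesDirectory = FileManager.default.temporaryDirectory.appendingPathComponent("Images/")
                session.start(imagesDirectory: imagesDirectory)
            }
    }
}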
I'm an iOS developer, and I've been testing our app in iOS 18.0 Beta. I noticed that there's a problem with the font rendering, and after troubleshooting, I've found out that it's caused by the removal of the PingFang.ttc font in 18.0.
Could you explain the reason for removing this font file, and which font should be used to display Chinese in the future?
My test device is an iPhone 11 Pro and the system version is iOS 18.0 (22A5297). I have also tested Beta 1 and it has the same issue.
In previous versions of the system, the PingFang font is located in this directory /System/Library/Fonts/LanguageSupport/PingFang.ttc. But in iOS 18.0, the font file in this directory has become Kohinoor.ttc, and I've tested that this font can't display Chinese either.
I traversed the following system font directories and could not find the PingFang.ttc font file.
/System/Library/Fonts/AppFonts
/System/Library/Fonts/Core
/System/Library/Fonts/CoreAddition
/System/Library/Fonts/CoreUI
/System/Library/Fonts/LanguageSupport
/System/Library/Fonts/UnicodeSupport
/System/Library/Fonts/Watch
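As a sanity check, the fonts can also be queried at runtime rather than by file path. A minimal sketch (the PostScript name below is the one from earlier iOS releases):
import UIKit

// Sketch: list any PingFang families UIKit still reports and try to resolve PingFang SC by name.
func checkPingFang() {
    let pingFangFamilies = UIFont.familyNames.filter { $0.contains("PingFang") }
    print("PingFang families: \(pingFangFamilies)")
    if let font = UIFont(name: "PingFangSC-Regular", size: 17) {
        print("Resolved \(font.fontName)")
    } else {
        print("PingFangSC-Regular did not resolve")
    }
}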
Looking for answers, thanks for the help!
After many successive OS and Xcode updates, my game controller Swift code generates a "DIS-CONNECTED" message.
macOS Sequoia 15.2
Xcode 16.2
I tried updating the PlayStation controller firmware on my Mac.
Still no luck with Xcode and its use of a game controller with tvOS.
I've been playing with the new GameSave API and cannot get it to work.
I followed the 3-step instructions from the Developer video. Step 2, "Next, login to your Apple developer account and include this entitlement in the provisioning profile for your game.", seems to be unnecessary, as Xcode sets this for you when you do step 1, "First add the iCloud entitlement to your game."
Running the app on my device and tapping "Load" starts the sync, then fails with the error "Couldn’t communicate with a helper application." I have no idea how to troubleshoot this. Every other time I've used CloudKit it has Just Worked™.
Halp‽
Here is my example app:
import Foundation
import SwiftUI
import GameSave
@main struct GameSaveTestApp: App {
    var body: some Scene {
        WindowGroup {
            GameView()
        }
    }
}

struct GameView: View {
    @State private var loader = GameLoader()

    var body: some View {
        List {
            Button("Load") { loader.load() }
            Button("Finish sync") { Task { try? await loader.finish() } }
        }
    }
}

@Observable class GameLoader {
    var directory: GameSaveSyncedDirectory?

    func stateChanged() {
        let newState = withObservationTracking {
            directory?.state
        } onChange: {
            Task { @MainActor [weak self] in self?.stateChanged() }
        }
        print("State changed to \(newState?.description ?? "nil")")
        switch newState {
        case .error(let error):
            print("ERROR: \(error.localizedDescription)")
        default: _ = 0 // NOOP
        }
    }

    func load() {
        print("Opening gamesave directory")
        directory = GameSaveSyncedDirectory.openDirectory()
        stateChanged()
    }

    func finish() async throws {
        print("finishing syncing")
        await directory?.finishSyncing()
    }
}
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS.
I have two questions regarding the newly updated APIs.
From the WWDC23 session "Meet Object Capture for iOS", I know that the Object Capture API uses point cloud data captured from the iPhone's LiDAR sensor. I want to know how to take the point cloud data captured during an iPhone ObjectCaptureSession and use it to create 3D models with PhotogrammetrySession on macOS.
From the example code from WWDC21, I know that PhotogrammetrySession utilizes the depth maps from captured photos by embedding them into the HEIC images, and uses that data to create a 3D asset on macOS. I would like to know whether point cloud data is also embedded into the images for use during 3D reconstruction and, if not, how else the point cloud data is supplied for reconstruction.
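For context, the macOS reconstruction path referred to above looks roughly like this; a minimal sketch where the input folder, output path, and detail level are placeholders:
import Foundation
import RealityKit

// Sketch: feed a folder of captured images to PhotogrammetrySession and request a USDZ model.
func reconstructModel() async throws {
    let inputFolder = URL(fileURLWithPath: "/path/to/CapturedImages", isDirectory: true)
    let outputURL = URL(fileURLWithPath: "/path/to/Model.usdz")

    let session = try PhotogrammetrySession(input: inputFolder,
                                            configuration: PhotogrammetrySession.Configuration())
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])

    for try await output in session.outputs {
        switch output {
        case .requestComplete(let request, let result):
            print("Finished \(request): \(result)")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        case .processingComplete:
            print("All requests done")
        default:
            break
        }
    }
}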
Another question: I know that point cloud data is returned as a result of a request to PhotogrammetrySession.Request. I would like to know if this point cloud data is the same set of data captured during the ObjectCaptureSession from WWDC23 that is used to create ObjectCapturePointCloudView.
Thank you to everyone for the help in advance. It's a real pleasure to be developing with all the updates to RealityKit and the Object Capture API.
Hello,
I've been tinkering with PortalComponent on visionOS a bit but noticed that the content of the WorldComponent is always clipped to the mesh geometry of whatever entities have the PortalComponent applied. Now I'm wondering if there is any way or trick to allow contents of the portal to peek out – similar to the Encounter Dinosaurs experience on Vision Pro (I assume it also uses PortalComponent?).
I saw that PortalComponent has a clippingPlane property (https://developer.apple.com/documentation/realitykit/portalcomponent/clippingplane-swift.property). But so far I haven't been able to achieve a perceptible visual difference with it.
If possible I would like to avoid hacky tricks using duplicate meshes or similar to achieve this.
Thanks for any hints!
I want to use Swift to write code for drawing multiple polygons, and I would like to find some examples as references. Can anyone provide example code or tell me where I can find such examples? Thank you!
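In case it helps others searching for the same thing, here is a minimal SwiftUI sketch (the vertex values are arbitrary): one Shape that strokes several polygons.
import SwiftUI

// Sketch: each inner array is one polygon's vertices; the shape closes and strokes all of them.
struct PolygonsShape: Shape {
    let polygons: [[CGPoint]]

    func path(in rect: CGRect) -> Path {
        var path = Path()
        for vertices in polygons where vertices.count > 2 {
            path.move(to: vertices[0])
            for point in vertices.dropFirst() { path.addLine(to: point) }
            path.closeSubpath()
        }
        return path
    }
}

struct PolygonsView: View {
    var body: some View {
        PolygonsShape(polygons: [
            [CGPoint(x: 20, y: 20), CGPoint(x: 120, y: 40), CGPoint(x: 60, y: 120)],
            [CGPoint(x: 150, y: 150), CGPoint(x: 250, y: 160), CGPoint(x: 240, y: 260), CGPoint(x: 160, y: 250)]
        ])
        .stroke(Color.blue, lineWidth: 2)
    }
}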
I have a Mac Studio 2023 M2 Max
Running Sonoma 14.6.1
Developing in Xcode 16.1
It seems that the NSScreen frame settings may be incorrect. The frame settings received from NSScreen.screens don't seem to match the display arrangement in System Settings.
Apologies in advance for this long post!
for (i, screen) in NSScreen.screens.enumerated() {
    let name = screen.localizedName
    Globals.logger.debug("Globals initializeScreens - screen \(i) '\(name, privacy: .public)'")
    Globals.logger.debug("Globals initializeScreens - '\(screen.debugDescription, privacy: .public)'")
}
This is what I receive in the log:
Globals initializeScreens - '<NSScreen: 0x600000ef4240;
name="PHL 346E2C";
backingScaleFactor=1.000000;
frame={{0, 0}, {3440, 1440}};
visibleFrame={{0, 0}, {3440, 1415}}>'
Globals initializeScreens - screen 2 'Blackmagic (1)'
Globals initializeScreens - '<NSScreen: 0x600000ef42a0;
name="Blackmagic (1)";
backingScaleFactor=1.000000;
frame={{-3840, 0}, {1920, 1080}};
visibleFrame={{-3840, 0}, {1920, 1055}}>'
Globals initializeScreens - screen 3 'Blackmagic (4)'
Globals initializeScreens - '<NSScreen: 0x600000ef4360;
name="Blackmagic (4)";
backingScaleFactor=1.000000;
frame={{-1920, 0}, {1920, 1080}};
visibleFrame={{-1920, 0}, {1920, 1055}}>'
Globals initializeScreens - screen 4 'Blackmagic (2)'
Globals initializeScreens - '<NSScreen: 0x600000ef43c0;
name="Blackmagic (2)";
backingScaleFactor=1.000000;
frame={{5360, 0}, {1920, 1080}};
visibleFrame={{5360, 0}, {1920, 1055}}>'
Globals initializeScreens - screen 5 'Blackmagic (3)'
Globals initializeScreens - '<NSScreen: 0x600000ef4420;
name="Blackmagic (3)";
backingScaleFactor=1.000000;
frame={{3440, 0}, {1920, 1080}};
visibleFrame={{3440, 0}, {1920, 1055}}>'
It looks like the frame settings for Blackmagic (2) and Blackmagic (4) are switched.
The setup has five monitors. Four are using the USB-C Digital AV Multiport Adapters. The output for these are streamed into a rack of A/V equipment using BlackMagic Design mini converters and monitors.
My Swift application allows users to open four movies, one for each of the AV Adapters. The movies can then be played back in sync for later processing by the A/V equipment.
Here are some screen captures that show my display settings.
Blackmagic (1) and Blackmagic (2) are to the left of the main screen.
Blackmagic (3) and Blackmagic(4) are to the right of the main screen.
The desktop is hard to see but is correct.
The wallpaper settings are all correct.
The wallpaper is correctly ordered when displayed on the monitors.
After opening the movies and using the NSScreen frame settings, the displays are incorrectly ordered. Test B and Test D are switched, which is what I would expect given the NSScreen frame values.
Any ideas? I've tried re-arranging the desktops, rebooting, etc. but no luck.
The code that changes the screen location is similar to this post on Stack Overflow
public func setDisplay( screen: NSScreen ) {
    Globals.logger.log("MovieWindowController - setDisplay = \(screen.localizedName, privacy: .public)")
    Globals.logger.debug("MovieWindowController - setDisplay - '\(screen.debugDescription, privacy: .public)'")
    let dx = CGFloat(Constants.midX)
    let dy = CGFloat(Constants.midY)
    var pos = NSPoint()
    pos.x = screen.visibleFrame.midX - dx
    pos.y = screen.visibleFrame.midY - dy
    Globals.logger.debug("MovieWindowController - setDisplay - x = '\(pos.x, privacy: .public)', y = '\(pos.y, privacy: .public)'")
    window?.setFrameOrigin(pos)
}
The log shows just what I would expect given the incorrect frame values.
MovieWindowController - setDisplay = Blackmagic (1)
MovieWindowController - setDisplay - '<NSScreen: 0x6000018e8420; name="Blackmagic (1)"; backingScaleFactor=1.000000; frame={{-3840, 0}, {1920, 1080}}; visibleFrame={{-3840, 0}, {1920, 1055}}>'
MovieWindowController - setDisplay - x = '-3840.000000', y = '-12.500000'
MovieWindowController - setDisplay = Blackmagic (2)
MovieWindowController - setDisplay - '<NSScreen: 0x6000018a10e0; name="Blackmagic (2)"; backingScaleFactor=1.000000; frame={{5360, 0}, {1920, 1080}}; visibleFrame={{5360, 0}, {1920, 1055}}>'
MovieWindowController - setDisplay - x = '5360.000000', y = '-12.500000'
MovieWindowController - setDisplay = Blackmagic (3)
MovieWindowController - setDisplay - '<NSScreen: 0x6000018cc8a0; name="Blackmagic (3)"; backingScaleFactor=1.000000; frame={{3440, 0}, {1920, 1080}}; visibleFrame={{3440, 0}, {1920, 1055}}>'
MovieWindowController - setDisplay - x = '3440.000000', y = '-12.500000'
MovieWindowController - setDisplay = Blackmagic (4)
MovieWindowController - setDisplay - '<NSScreen: 0x6000018c9ce0; name="Blackmagic (4)"; backingScaleFactor=1.000000; frame={{-1920, 0}, {1920, 1080}}; visibleFrame={{-1920, 0}, {1920, 1055}}>'
MovieWindowController - setDisplay - x = '-1920.000000', y = '-12.500000'
Am I correct? I think this is driving me crazy!
Thanks in advance!
Edit: The mouse behavior is correct in moving across the displays!
We are a team of engineers working on an app intended to visualize medical images. The app is used in situations that involve time-critical decision making for acute clinical conditions, so stability and performance are of utmost importance and directly affect how quickly treatment can be given. The app we are developing uses multiple libraries and tools such as vtk, webgl, opengl, webkit, gl-matrix, etc.
The problem can be described as follows: when a 3D volume is rendered in the app and we try to rotate it, the rotation is slow, unresponsive, and laggy. Specifically, we have noticed that on iOS 18.1 the volume rotation is much smoother than on the latest iOS 18.2. Earlier we faced a somewhat similar issue with iOS 17, but it improved in iOS 18.1. This performance regression is affecting the user experience in our healthcare application.
We have taken reference from the cornerstone.js code and you can reproduce the issue using the following example: https://www.cornerstonejs.org/live-examples/volumeviewport3d
Steps to Reproduce:
Load the above-mentioned test example in Safari on an iPhone running iOS 18.2.
Perform volume rendering using the provided dataset.
Measure the time taken for each rotate or drag action on the volume.
Repeat the same steps on an iPhone running iOS 18.1 for comparison.
Additional Information:
Device models tested: iPhone 12, iPhone 13, iPhone 14
iOS versions with the issue: 18.2, 18.3 (Beta)
I would appreciate any insights or suggestions on how to address this performance regression. If additional information is needed, please let me know.
Thank you.
Hello, we are working on an iOS game project, and as it progresses the project grows larger and larger. Because we use other game dependencies and libraries, "larger and larger" here refers to the whole project; the source files we integrate and compile with Xcode ourselves are not many. Now we seem to have hit a bottleneck: when I add new files, or add functions to existing files to implement a new feature, the Xcode build gets stuck at "Indexing | Initializing datastore" forever and cannot produce a final build.
macOS 15.1, Xcode 16.2
Can you suggest any solutions to this problem?
Also submitted Feedback ID #FB18432749
After updating to macOS 26.1 I encountered an issue where Roblox tends to freeze quite often, for 10 to 60 seconds at most. This is really annoying, as I play the game a lot. My theory is that it is something like a driver issue with Metal. I have reinstalled macOS, reinstalled the game, and manually lowered the performance settings, but nothing is working.
I'm wondering if you could help, when this will be fixed, and whether others are having the same issue.
Many thanks, William.
Hi, I'm trying to use metal-cpp, but I get compile errors:
ISO C++ requires the name after '::' to be found in the same scope as the name before '::'
metal-cpp/Foundation/NSSharedPtr.hpp(162):
template <class _Class>
_NS_INLINE NS::SharedPtr<_Class>::~SharedPtr()
{
    if (m_pObject)
    {
        m_pObject->release();
    }
}
Use of old-style cast
metal-cpp/Foundation/NSObject.hpp(149):
template <class _Dst>
_NS_INLINE _Dst NS::Object::bridgingCast(const void* pObj)
{
#ifdef __OBJC__
    return (__bridge _Dst)pObj;
#else
    return (_Dst)pObj;
#endif // __OBJC__
}
XCode Project was generated using CMake:
target_compile_features(${MODULE_NAME} PRIVATE cxx_std_20)
target_compile_options(${MODULE_NAME}
PRIVATE
"-Wgnu-anonymous-struct"
"-Wold-style-cast"
"-Wdtor-name"
"-Wpedantic"
"-Wno-gnu"
)
Maybe I need to set some CMake flags for the C++ compiler?
My app isn't a game app, but iOS 18 always enables Game Mode for it automatically. I couldn't find a switch in Xcode to disable it.
Our application uses Core Image to apply custom CIFilters to still images and video. I'm running into issues when the supplied image is large enough (>4096) that the image is automatically tiled. The simplest of these to describe is a filter that performs various mirroring effects - backwards, upside-down etc.
The implementation portion of the filter provides a sampler (src) and passes this into the kernel with an roiCallback that uses the destRect, inset by -1 in both dimensions:
return [mirrorsKernel applyWithExtent:[src extent]
                          roiCallback:^CGRect(int index, CGRect destRect) { return CGRectInset(destRect, -1, -1); }
                            arguments:@[src]];
The kernel is very simple, sampling from the X coordinate equal to the src width - current coordinate:
float4 backwards(sampler image, destination dest)
{
    float2 dc = dest.coord();
    dc.x = image.size().x - dc.x;
    return image.sample(image.transform(dc));
}
When this runs on an image that is wider than 4096, tiling happens, with the result being that destRect is not the entire image and therefore the resulting output image is incorrect. If the ROI uses [src extent] instead of destRect, the result is correct, but this will lead to serious performance issues when src gets too large.
All of this makes sense to me. What I'd like to know is if there is a way to handle this filter's requirements for sampling from the entire source while still limiting the ROI to maintain performance? I think the answer is probably no within our current structure and performance limits. But I wanted to see if there's anything we're missing.
I am aware that the simple kernel above can be replaced with an affine transform, which is an option for backwards and upside-down mirroring. We have other kernels in this filter that perform mirroring of either half of the source image or one quadrant of the source image. In these cases, I suppose it might be possible (up to a point) to create a custom ROI that is only the portion of the source that is being mirrored. We have not attempted that yet.
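For the pure horizontal-mirror case above, one custom-ROI sketch (in Swift; mirrorKernel and srcImage are placeholders for the post's kernel and input, and I haven't profiled it): the region of interest for a destination tile is just that tile reflected about the source's vertical midline, so the ROI stays tile-sized instead of covering the whole image.
import CoreImage

// Sketch: tile-sized ROI for a horizontal mirror by reflecting destRect across the image's center line.
func applyBackwards(kernel mirrorKernel: CIKernel, to srcImage: CIImage) -> CIImage? {
    let srcExtent = srcImage.extent
    return mirrorKernel.apply(
        extent: srcExtent,
        roiCallback: { _, destRect in
            let mirroredX = srcExtent.maxX - (destRect.maxX - srcExtent.minX)
            return CGRect(x: mirroredX, y: destRect.minY,
                          width: destRect.width, height: destRect.height)
        },
        arguments: [srcImage]
    )
}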
Any thoughts/input appreciated, thanks!
I have this game controller connected to my M1, and the Simulator won't announce it via .GCControllerDidConnect. This works fine on iOS and macOS.
I have the simulator set to "Send Game Controller to Device", which the Simulator does. If I disable that, then I can control the simulator view. But once enabled, the Simulator doesn't tell the app about the controller.
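For reference, the connection handling in question is essentially the standard GameController notification pattern; a minimal sketch:
import GameController

// Sketch: observe controller connections and also check controllers that were already attached.
func observeControllers() {
    NotificationCenter.default.addObserver(forName: .GCControllerDidConnect,
                                           object: nil,
                                           queue: .main) { note in
        if let controller = note.object as? GCController {
            print("Connected: \(controller.vendorName ?? "unknown controller")")
        }
    }
    // Controllers attached before the observer was registered won't post the notification,
    // so check the current list as well.
    print("Already attached: \(GCController.controllers().map { $0.vendorName ?? "?" })")
}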