Hi everyone,
I've been working with the autoAdjustmentFilters provided by Core Image, which include filters like CIHighlightShadowAdjust, CIVibrance, and CIToneCurve. However, I've noticed that the results differ significantly from the "Auto" enhancement feature in the Photos app. In the Photos app, the Auto function seems to adjust multiple parameters such as contrast, exposure, white balance, highlights, and shadows in a more sophisticated manner.
Is there an API or a framework available that can replicate the more sophisticated "Auto" adjustments as seen in the Photos app? Or would I need to manually combine filters (like CIExposureAdjust, CIWhitePointAdjust, etc.) to approximate this functionality?
Any insights or recommendations on how to achieve this would be greatly appreciated. Thank you!
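For reference, this is roughly what I do today: apply whatever autoAdjustmentFilters suggests, then stack a manual pass (CIExposureAdjust here) on top. A minimal sketch, not the Photos app's actual pipeline:

import CoreImage

// Apply Core Image's suggested auto-adjustment filters, then a manual exposure tweak.
// This is a sketch of the approach, not the Photos app's "Auto" algorithm.
func autoEnhanced(_ input: CIImage) -> CIImage {
    var image = input
    // autoAdjustmentFilters() returns preconfigured filters such as
    // CIHighlightShadowAdjust, CIVibrance, and CIToneCurve.
    for filter in input.autoAdjustmentFilters() {
        filter.setValue(image, forKey: kCIInputImageKey)
        if let output = filter.outputImage {
            image = output
        }
    }
    // Optional manual pass on top of the suggestions, e.g. a slight exposure lift.
    let exposure = CIFilter(name: "CIExposureAdjust")!
    exposure.setValue(image, forKey: kCIInputImageKey)
    exposure.setValue(0.3, forKey: kCIInputEVKey)
    return exposure.outputImage ?? image
}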
For some reason I can't disable the Graphics HUD.
Not really a problem for development, but it's also showing in TestFlight apps, for example when swiping down on the keyboard, but also in some other places.
Of course I tried disabling the toggle, but even when it's off the HUD is still showing. Even completely disabling Developer Mode does not work.
Is this a known issue?
I already scrolled through possibly every Google search result but I can't figure out how to solve this.
Why is there no video frame data when ReplayKit enters the background and then returns to the foreground?
Hi guys! Is there any way to get a frame at a certain time? I'm writing a plug-in and want to use the 2 frames before and the 2 frames after the current frame in order to render the final image.
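For context, if the source is available as an AVAsset I can grab frames at exact times like this (a sketch assuming AVFoundation is accessible from the plug-in; the host may hand frames over differently):

import AVFoundation

// Sketch: pull a single frame at an exact time from an AVAsset.
// Assumes the plug-in can reach the source media as an AVAsset, which may not hold for every host.
func frame(at seconds: Double, from asset: AVAsset) throws -> CGImage {
    let generator = AVAssetImageGenerator(asset: asset)
    generator.requestedTimeToleranceBefore = .zero   // exact frame, no snapping
    generator.requestedTimeToleranceAfter = .zero
    let time = CMTime(seconds: seconds, preferredTimescale: 600)
    return try generator.copyCGImage(at: time, actualTime: nil)
}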
I want to implement the ability to apply a Lightroom preset (.xmp file) to an image in my app, but am running into difficulties. How can I configure things like color grading, curves, etc. in Swift?
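What I have so far is only a rough sketch: I parse the .xmp myself and map its tone-curve points onto CIToneCurve (the mapping of Lightroom parameters here is my own approximation, not an official conversion):

import CoreImage

// Sketch: apply a parsed tone curve (up to five control points) via CIToneCurve.
// The .xmp parsing itself is assumed to happen elsewhere; this only covers the curve.
func applyToneCurve(_ points: [CGPoint], to image: CIImage) -> CIImage {
    let filter = CIFilter(name: "CIToneCurve")!
    filter.setValue(image, forKey: kCIInputImageKey)
    // CIToneCurve expects exactly five control points, inputPoint0 through inputPoint4.
    for (index, point) in points.prefix(5).enumerated() {
        filter.setValue(CIVector(x: point.x, y: point.y), forKey: "inputPoint\(index)")
    }
    return filter.outputImage ?? image
}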
I'm trying to build my project using Unreal Engine 5.4 for iOS. I use SocketIO to connect to the backend; the SocketIOClient plugin, version 2.8.0, is used for this.
When trying to connect to the socket, a crash occurs. This only happens on iOS, only in Distribution builds, and only on Unreal 5.4. There are no problems on Unreal 5.2. Callstack:
crashlog.crash
The callstack may be slightly different, but the problem is always when allocating or deallocating memory. I suspect that this may be a race condition and is related to some peculiarities of working with memory on iOS. There are also several similar issues in the plugin repository, for example this one.
I tried using other versions of the plugin and other versions of Xcode, and tried to build Socket.IO using both C++20 and C++17, but nothing helps. Does anyone know what can be done about this?
__builtin_ia32_cvtb2mask512() is the GNU C builtin for vpmovb2m k, zmm.
The Intel intrinsic for it is _mm512_movepi8_mask.
It extracts the most-significant bit from each byte, producing an integer mask.
The SSE2 and AVX2 instructions pmovmskb and vpmovmskb do the same thing for 16 or 32-byte vectors, producing the mask in a GPR instead of an AVX-512 mask register. (_mm_movemask_epi8 and _mm256_movemask_epi8).
I would like an implementation for ARM that is faster than the scalar version below, ideally both an ARM NEON version and an ARM SVE version.
I have attached a basic scalar implementation in C. For those trying to implement this on ARM: we care about the high bit, and each byte's high bit (in a 128-bit vector) can easily be shifted to the low bit using the ARM NEON intrinsic vshrq_n_u8(). Note that I would prefer not to store the bitmap to memory; it should just be the return value of the function, similar to the following function.
#define _(n) __attribute((vector_size(1<<n),aligned(1)))
typedef char V _(6);      // 64 bytes, 512 bits
typedef unsigned long U;  // 64-bit result mask
#undef _

U generic_cvtb2mask512(V v) {
    U mask = 0;
    for (int i = 0; i < 64; i++) {
        // shift mask by 1 and OR with the MSB of the v[i] byte
        mask = (mask << 1) | ((v[i] & 0x80) >> 7);
    }
    return mask;
}
This is also a duplicate of: https://stackoverflow.com/questions/79225312
I am trying to create empty metadata and set the HDRGainMapHeadroom to xxx. However, the final returned mutableMetadata doesn't contain HDRGainMap:HDRGainMapVersion or HDRGainMap:HDRGainMapHeadroom, but iio:hasXMP does exist.
Why? Is it because the HDRGainMap namespace is private?
func createHDRGainMapMetadata(version: Int, headroom: Double) -> CGImageMetadata? {
    // Create a mutable metadata object
    let mutableMetadata = CGImageMetadataCreateMutable()

    // Define the namespace for HDRGainMap
    let namespace = "HDRGainMap"

    // Set the iio:hasXMP flag
    let xmpKeyPath = "iio:hasXMP"
    let xmpValue = String(true)
    let xmpSetResult = CGImageMetadataSetValueWithPath(mutableMetadata, nil, xmpKeyPath as CFString, xmpValue as CFString)
    if xmpSetResult == false {
        print("Failed to set xmp")
    }

    // Set the HDRGainMapVersion item
    let versionKeyPath = "\(namespace):HDRGainMapVersion"
    let versionValue = String(version)
    let versionSetResult = CGImageMetadataSetValueWithPath(mutableMetadata, nil, versionKeyPath as CFString, versionValue as CFString)
    if versionSetResult == false {
        print("Failed to set HDRGainMapVersion")
    }

    // Set the HDRGainMapHeadroom item
    let headroomKeyPath = "\(namespace):HDRGainMapHeadroom"
    let headroomValue = String(headroom)
    let headroomSetResult = CGImageMetadataSetValueWithPath(mutableMetadata, nil, headroomKeyPath as CFString, headroomValue as CFString)
    if headroomSetResult == false {
        print("Failed to set HDRGainMapHeadroom")
    }

    return mutableMetadata
}
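For comparison, setting values under a custom prefix only works for me after registering the namespace on the metadata object first. The namespace URI below is made up for illustration; Apple's own HDRGainMap namespace may be private and not registrable this way:

import ImageIO

// Sketch: register a (hypothetical) namespace before writing tags with that prefix.
func makeMetadataWithCustomNamespace() -> CGImageMetadata? {
    let metadata = CGImageMetadataCreateMutable()
    var error: Unmanaged<CFError>?
    let registered = CGImageMetadataRegisterNamespaceForPrefix(
        metadata,
        "http://example.com/hdrgainmap/1.0/" as CFString, // hypothetical URI
        "HDRGainMap" as CFString,
        &error)
    guard registered else {
        print("Namespace registration failed: \(String(describing: error?.takeRetainedValue()))")
        return nil
    }
    let ok = CGImageMetadataSetValueWithPath(
        metadata, nil, "HDRGainMap:HDRGainMapVersion" as CFString, "1" as CFString)
    return ok ? metadata : nil
}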
Hi all,
I have been trying to get AssistiveTouch's "snap to item" feature to work for a Unity game built using Apple's Core & Accessibility API. Switch Control recognises these buttons; however, eye tracking will not snap to them. The case in which it needs to snap is when an external eye-tracking device is connected and uses AssistiveTouch and its snap-to-item feature.
All buttons in the game have an AccessibilityNode with the trait 'Button' and an appropriate label, which, following the documentation and comments on the developer forum, should allow them to be recognised by snap to item.
This is not the case: devices (iPads and iPhones) do not recognise the buttons as snap-to-item targets.
Does anyone know why this is the case, and if this is a bug?
Hello,
I am trying to read video frames using AVAssetReaderTrackOutput. Here is the sample code:
//prepare assets
let asset = AVURLAsset(url: some_url)
let assetReader = try AVAssetReader(asset: asset)
guard let videoTrack = try await asset.loadTracks(withMediaCharacteristic: .visual).first else {
    throw SomeErrorCode.error
}
var readerSettings: [String: Any] = [
    kCVPixelBufferIOSurfacePropertiesKey as String: [String: String]()
]

//check if HDR video
var isHDRDetected: Bool = false
let hdrTracks = try await asset.loadTracks(withMediaCharacteristic: .containsHDRVideo)
if hdrTracks.count > 0 {
    readerSettings[AVVideoAllowWideColorKey as String] = true
    readerSettings[kCVPixelBufferPixelFormatTypeKey as String] =
        kCVPixelFormatType_420YpCbCr10BiPlanarFullRange
    isHDRDetected = true
}

//add output to assetReader
let output = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: readerSettings)
guard assetReader.canAdd(output) else {
    throw SomeErrorCode.error
}
assetReader.add(output)
guard assetReader.startReading() else {
    throw SomeErrorCode.error
}

//add writer output settings
let videoOutputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.hevc,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080,
]
let finalPath = URL(fileURLWithPath: "//some URL path")
let assetWriter = try AVAssetWriter(outputURL: finalPath, fileType: AVFileType.mov)
guard assetWriter.canApply(outputSettings: videoOutputSettings, forMediaType: AVMediaType.video)
else {
    throw SomeErrorCode.error
}
let assetWriterInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoOutputSettings)
let sourcePixelAttributes: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: isHDRDetected
        ? kCVPixelFormatType_420YpCbCr10BiPlanarFullRange : kCVPixelFormatType_32ARGB,
    kCVPixelBufferWidthKey as String: 1920,
    kCVPixelBufferHeightKey as String: 1080,
]

//create assetAdaptor
let assetAdaptor = AVAssetWriterInputTaggedPixelBufferGroupAdaptor(
    assetWriterInput: assetWriterInput, sourcePixelBufferAttributes: sourcePixelAttributes)
guard assetWriter.canAdd(assetWriterInput) else {
    throw SomeErrorCode.error
}
assetWriter.add(assetWriterInput)
guard assetWriter.startWriting() else {
    throw SomeErrorCode.error
}
assetWriter.startSession(atSourceTime: CMTime.zero)

//prepare transfer session
var session: VTPixelTransferSession? = nil
guard
    VTPixelTransferSessionCreate(allocator: kCFAllocatorDefault, pixelTransferSessionOut: &session)
        == noErr, let session
else {
    throw SomeErrorCode.error
}
guard let pixelBufferPool = assetAdaptor.pixelBufferPool else {
    throw SomeErrorCode.error
}

//read through frames
while let nextSampleBuffer = output.copyNextSampleBuffer() {
    autoreleasepool {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(nextSampleBuffer) else {
            return
        }
        //this part copied from (https://developer.apple.com/videos/play/wwdc2023/10181) at 23:58 timestamp
        let attachment = [
            kCVImageBufferYCbCrMatrixKey: kCVImageBufferYCbCrMatrix_ITU_R_2020,
            kCVImageBufferColorPrimariesKey: kCVImageBufferColorPrimaries_ITU_R_2020,
            kCVImageBufferTransferFunctionKey: kCVImageBufferTransferFunction_SMPTE_ST_2084_PQ,
        ]
        CVBufferSetAttachments(imageBuffer, attachment as CFDictionary, .shouldPropagate)

        //now convert to CIImage with HDR data
        let image = CIImage(cvPixelBuffer: imageBuffer)
        let cropped = image //here perform some actions like cropping, flipping, etc. and preserve these changes by converting to CGImage first:

        //this part copied from (https://developer.apple.com/videos/play/wwdc2023/10181) at 24:30 timestamp
        guard
            let cgImage = context.createCGImage(
                cropped, from: cropped.extent, format: .RGBA16,
                colorSpace: CGColorSpace(name: CGColorSpace.itur_2100_PQ)!)
        else {
            return
        }

        //finally convert it back to CIImage
        let newScaledImage = CIImage(cgImage: cgImage)

        //now write it to a new pixelBuffer
        let pixelBufferAttributes: [String: Any] = [
            kCVPixelBufferCGImageCompatibilityKey as String: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey as String: true,
        ]
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferCreate(
            kCFAllocatorDefault, Int(newScaledImage.extent.width), Int(newScaledImage.extent.height),
            kCVPixelFormatType_420YpCbCr10BiPlanarFullRange, pixelBufferAttributes as CFDictionary,
            &pixelBuffer)
        guard let pixelBuffer else {
            return
        }
        context.render(newScaledImage, to: pixelBuffer) //context is a CIContext reference

        var pixelTransferBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelTransferBuffer)
        guard let pixelTransferBuffer else {
            return
        }
        // Transfer the image to the pixel buffer.
        guard
            VTPixelTransferSessionTransferImage(session, from: pixelBuffer, to: pixelTransferBuffer)
                == noErr
        else {
            return
        }
        //finally append to taggedBuffer
    }
}
assetWriterInput.markAsFinished()
await assetWriter.finishWriting()
The resulting video does not have the correct color compared to the original; it turns out too bright. If I play around with the attachment values it can be either too dim or too bright, but never exactly like the original. What am I missing in my setup? I did find that kCVPixelFormatType_4444AYpCbCr16 can produce proper video output, but then I can't convert it to CIImage, so I can't do the CIImage operations I need (mainly cropping and resizing).
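One thing I'm not sure about (an assumption on my side, not a confirmed fix): the HEVC writer settings above never declare color properties, so the encoder might tag the output as SDR/BT.709 even though the pixels are PQ/BT.2020. This is what I plan to try next:

import AVFoundation

// Writer output settings that also declare HDR10 color properties.
// Assumption: the missing color tags are what causes the brightness shift.
let hdrVideoOutputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.hevc,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080,
    AVVideoColorPropertiesKey: [
        AVVideoColorPrimariesKey: AVVideoColorPrimaries_ITU_R_2020,
        AVVideoTransferFunctionKey: AVVideoTransferFunction_SMPTE_ST_2084_PQ,
        AVVideoYCbCrMatrixKey: AVVideoYCbCrMatrix_ITU_R_2020,
    ],
]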
Hi,
I'm working on a very simple app that tries to read a coordinates card and paste the data into different fields. The card's layout is columns 1-10, rows A-J, and a two-digit number in each cell. In my app I have a field for each of those cells (A1, A2...). I want the OCR to read the card and paste the info, but I just can't get it to work. I have two problems. First, the camera won't close: it remains open until I press the SAVE button (this is not good, because a user could take 3, 4, 5... pictures of the same card with, maybe, different results, and then which is the good one?). Second, after I press save, I can see the OCR kind of works (the console prints all the data read), but the info is not pasted at all.
Any idea? I know it's hard to tell what's wrong, but I've tried ChatGPT and whatever it suggests just doesn't work.
This is the code from the scan view:
import SwiftUI
import Vision
import VisionKit

struct ScanCardView: UIViewControllerRepresentable {
    @Binding var scannedCoordinates: [String: String]
    var useLettersForColumns: Bool
    var numberOfColumns: Int
    var numberOfRows: Int

    @Environment(\.presentationMode) var presentationMode

    func makeUIViewController(context: Context) -> VNDocumentCameraViewController {
        let scannerVC = VNDocumentCameraViewController()
        scannerVC.delegate = context.coordinator
        return scannerVC
    }

    func updateUIViewController(_ uiViewController: VNDocumentCameraViewController, context: Context) {}

    func makeCoordinator() -> Coordinator {
        return Coordinator(self)
    }

    class Coordinator: NSObject, VNDocumentCameraViewControllerDelegate {
        let parent: ScanCardView

        init(_ parent: ScanCardView) {
            self.parent = parent
        }

        func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan) {
            print("Scan finished, processing image...")
            guard scan.pageCount > 0, let image = scan.imageOfPage(at: 0).cgImage else {
                print("Could not get the image from the scan.")
                controller.dismiss(animated: true, completion: nil)
                return
            }
            recognizeText(from: image)
            DispatchQueue.main.async {
                print("Finishing OCR process and closing the camera.")
                controller.dismiss(animated: true, completion: nil)
            }
        }

        func documentCameraViewControllerDidCancel(_ controller: VNDocumentCameraViewController) {
            print("Scan cancelled by the user.")
            controller.dismiss(animated: true, completion: nil)
        }

        func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFailWithError error: Error) {
            print("Scan error: \(error.localizedDescription)")
            controller.dismiss(animated: true, completion: nil)
        }

        private func recognizeText(from image: CGImage) {
            let request = VNRecognizeTextRequest { (request, error) in
                guard let observations = request.results as? [VNRecognizedTextObservation], error == nil else {
                    print("Text recognition error: \(String(describing: error?.localizedDescription))")
                    DispatchQueue.main.async {
                        self.parent.presentationMode.wrappedValue.dismiss()
                    }
                    return
                }
                let recognizedStrings = observations.compactMap { observation in
                    observation.topCandidates(1).first?.string
                }
                print("Recognized text: \(recognizedStrings)")
                let filteredCoordinates = self.filterValidCoordinates(from: recognizedStrings)
                DispatchQueue.main.async {
                    print("Coordinates detected after filtering: \(filteredCoordinates)")
                    self.parent.scannedCoordinates = filteredCoordinates
                }
            }
            request.recognitionLevel = .accurate

            let handler = VNImageRequestHandler(cgImage: image, options: [:])
            DispatchQueue.global(qos: .userInitiated).async {
                do {
                    try handler.perform([request])
                    print("OCR finished and data processed.")
                } catch {
                    print("Error performing the OCR request: \(error.localizedDescription)")
                }
            }
        }

        private func filterValidCoordinates(from strings: [String]) -> [String: String] {
            var result: [String: String] = [:]
            print("Text before filtering: \(strings)")
            for string in strings {
                let trimmedString = string.replacingOccurrences(of: " ", with: "")
                if parent.useLettersForColumns {
                    let pattern = "^[A-J]\\d{1,2}$" // Letters A-J followed by 1 or 2 digits
                    if trimmedString.range(of: pattern, options: .regularExpression) != nil {
                        print("Valid coordinate detected (letters): \(trimmedString)")
                        result[trimmedString] = "Value" // Test assignment
                    }
                } else {
                    let pattern = "^[1-9]\\d{0,1}$" // Numbers only, 1 to 99
                    if trimmedString.range(of: pattern, options: .regularExpression) != nil {
                        print("Valid coordinate detected (numbers): \(trimmedString)")
                        result[trimmedString] = "Value"
                    }
                }
            }
            print("Final coordinates after filtering: \(result)")
            return result
        }
    }
}
Create the QRCode
CIFilter<CIQRCodeGenerator> *f = CIFilter.QRCodeGeneratorFilter;
f.message = [@"Message" dataUsingEncoding:NSASCIIStringEncoding];
f.correctionLevel = @"Q"; // increase level
CIImage *qrcode = f.outputImage;
Overlay the icon
CIImage *icon = [CIImage imageWithContentsOfURL:url];
CGAffineTransform t = CGAffineTransformMakeTranslation(
    (qrcode.extent.size.width - icon.extent.size.width) / 2.0,
    (qrcode.extent.size.height - icon.extent.size.height) / 2.0);
icon = [icon imageByApplyingTransform:t];
qrcode = [icon imageByCompositingOver:qrcode];
Round off the corners
static dispatch_once_t onceToken;
static CIWarpKernel *k;
dispatch_once(&onceToken, ^{
    k = [CIWarpKernel kernelWithFunctionName:@"bend_corners"
                        fromMetalLibraryData:metalLibData()
                                       error:nil];
});
qrcode = [k applyWithExtent:qrcode.extent
                roiCallback:^CGRect(int i, CGRect r) {
                    return CGRectInset(r, -radius, -radius); }
                 inputImage:qrcode
                  arguments:@[[CIVector vectorWithCGRect:qrcode.extent], @(radius)]];
…and this code for the kernel should go in a separate .ci.metal source file:
float2 bend_corners(float4 extent, float s, destination dest)
{
    float2 p, dc = dest.coord();
    float ratio = 1.0;

    // Round lower left corner
    p = float2(extent.x + s, extent.y + s);
    if (dc.x < p.x && dc.y < p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x, d.y) / max(d.x, d.y);
        ratio = sqrt(1.0 + ratio*ratio);
        return (dc - p)*ratio + p;
    }

    // Round lower right corner
    p = float2(extent.x + extent.z - s, extent.y + s);
    if (dc.x > p.x && dc.y < p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x, d.y) / max(d.x, d.y);
        ratio = sqrt(1.0 + ratio*ratio);
        return (dc - p)*ratio + p;
    }

    // Round upper left corner
    p = float2(extent.x + s, extent.y + extent.w - s);
    if (dc.x < p.x && dc.y > p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x, d.y) / max(d.x, d.y);
        ratio = sqrt(1.0 + ratio*ratio);
        return (dc - p)*ratio + p;
    }

    // Round upper right corner
    p = float2(extent.x + extent.z - s, extent.y + extent.w - s);
    if (dc.x > p.x && dc.y > p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x, d.y) / max(d.x, d.y);
        ratio = sqrt(1.0 + ratio*ratio);
        return (dc - p)*ratio + p;
    }

    return dc;
}
Hi everyone,
I'm using the Vision framework’s ImageAestheticsScoresObservation class (https://developer.apple.com/documentation/vision/imageaestheticsscoresobservation).
I noticed that the overallScore returned sometimes gives negative values. Could someone confirm whether the expected range of the score is from -1.0 to 1.0?
The documentation doesn’t explicitly state the possible score range, so I’d appreciate any clarification or insights.
Thanks in advance!
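For context, this is roughly how I obtain the observation (a sketch assuming the new async Vision request API; my real call site differs):

import Vision

// Sketch: CalculateImageAestheticsScoresRequest produces an ImageAestheticsScoresObservation.
// Assumes the Swift-only Vision API introduced with iOS 18 / macOS 15.
func aestheticsScore(for url: URL) async throws -> Float {
    let request = CalculateImageAestheticsScoresRequest()
    let observation = try await request.perform(on: url)
    // overallScore is sometimes negative in my tests; is the documented range -1.0 to 1.0?
    return observation.overallScore
}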
I use Unity 2020.3.48f1 to develop a game. To implement Apple services integration I use the Apple Unity plugins (https://github.com/apple/unityplugins). With the latest version of the plugins I get an error in the Unity project after importing the plugin: it says "Not allowed platform VisionOS". When I tried an older version of the plugins, I get a runtime error when calling "var fetchItemsResponse = await GKLocalPlayer.Local.FetchItems();" on line 42; it drops an EXC_BAD_ACCESS (code=257, address=0x0000...) error. I tried using different commits from the official repository and even custom branches of the Apple Unity plugins like https://github.com/muZZkat/unityplugins/tree/muzzkat/fix-fetch-items, but it did not help.
Here is the full script that tries to use the Apple Unity plugins:
using System.Threading.Tasks;
using UnityEngine;
using System.Collections;
using System;
using Apple.GameKit;
using UnityEngine.UI;

public class TheScript : MonoBehaviour
{
    [SerializeField]
    InputField otp;

    string Signature;
    string TeamPlayerID;
    string Salt;
    string PublicKeyUrl;
    string Timestamp;

    void Start()
    {
        StartCoroutine(Call());
    }

    private IEnumerator Call()
    {
        yield return new WaitForSeconds(5);
        Login();
    }

    public async Task Login()
    {
        otp.text += $"Loginig... ";
        if (!Apple.GameKit.GKLocalPlayer.Local.IsAuthenticated)
        {
            try
            {
                var player = await GKLocalPlayer.Authenticate();
                var localPlayer = GKLocalPlayer.Local;
                TeamPlayerID = localPlayer.TeamPlayerId;

                var fetchItemsResponse = await GKLocalPlayer.Local.FetchItems();
                Signature = Convert.ToBase64String(fetchItemsResponse.GetSignature());
                PublicKeyUrl = fetchItemsResponse.PublicKeyUrl;

                otp.text += $"Team Player ID: {TeamPlayerID} ";
                otp.text += $"PublicKeyUrl: {PublicKeyUrl} ";
            }
            catch (Exception e)
            {
                otp.text += $"Error: " + e.Message;
            }
        }
        else
        {
            Debug.Log("AppleGameCenter player already logged in.");
        }
    }

    async Task SignInWithAppleGameCenterAsync(string signature, string teamPlayerId, string publicKeyURL, string salt, ulong timestamp)
    {
    }
}
So, I'm done with GPTK and decided to delete it. The only thing I installed was brew -v install apple/apple/game-porting-toolkit and the external libraries from the ditto command. Now, I tried to remove it, but even after
brew remove game-porting-toolkit
brew autoremove
all of the dependencies installed with brew are still there. The most obvious one was game-porting-toolkit-compiler, but even after removing that there are so many libraries left orphaned that it's just impossible to identify them all manually. Is there a way to clean this up, or is the easiest option simply to uninstall Homebrew completely and reinstall it again?
I recently needed to develop an application to obtain the window list, which requires Screen Recording permissions. Apple's official documentation mentions using the two functions CGPreflightScreenCaptureAccess and CGRequestScreenCaptureAccess to request permissions. These functions are stated to be available since version 10.15. However, when I used these two functions on a device running macOS 10.15.7, I encountered the errors shown in the attached screenshot. I used the nm tool to inspect the symbols in the CoreGraphics.framework and found that these two functions were not present. Could you help me understand why this is happening?
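As a stopgap I now resolve the symbol at runtime before calling it (a defensive sketch, based on my assumption that the symbol simply isn't exported by CoreGraphics on that 10.15.7 build):

import Darwin
import CoreGraphics

// Look the function up at runtime instead of linking it directly, so the app
// doesn't crash on systems where the symbol is missing.
typealias PreflightFn = @convention(c) () -> Bool

if let handle = dlopen(nil, RTLD_NOW),
   let sym = dlsym(handle, "CGPreflightScreenCaptureAccess") {
    let preflight = unsafeBitCast(sym, to: PreflightFn.self)
    print("Screen capture access granted: \(preflight())")
} else {
    print("CGPreflightScreenCaptureAccess is not available on this system")
}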
Hi
I've noticed one issue in Metal HUD, but I'm not sure if it is a bug in the Metal HUD or if there is a purpose for this behavior.
Metal HUD has an option to send the data to system log in raw format where the numbers are like
metal-HUD: ,,,,,...,
https://developer.apple.com/documentation/xcode/monitoring-your-metal-apps-graphics-performance/
If the HUD is displayed, it works just fine, but it seems that when the HUD is hidden (with shift-F9) it still sends the data to the system log; however, the numbers are always the same and are never updated, even though the app is still running.
I would expect it to log the data whether the HUD is displayed or not; this of course leads to incorrect FPS calculations.
Here is an example of the system log entries when the HUD is not visible:
We have a macOS app (not yet released, but in use by ourselves), that provides scoreboards for streaming sport events.
Today it is expected, that there are nice animations for goals, etc. We are streaming using NDI, which requires a CVPixelBuffer for each frame.
We currently create these animations using CABasicAnimation, CAAnimation and CAKeyframeAnimation. In addition we use ScreenCaptureKit to generate the frames.
This works fine at 25/30 fps, as long as the window where our animations are performed is visible. But this is not how it should be: our main app window is a smaller control display that shows the animations at reduced size, while the streamed animations need to be in HD format and later maybe in 4K.
When using an offscreen window, the animations are not calculated; we get about 1 frame per second. So we currently have to connect an external display to the MacBook and open the large windows there, which is an ugly solution.
Do we use a completely wrong approach? Or is there a way to tell the macOS to perform the animations although it is an offscreen window?
If it cannot work that way, what is an alternative?
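One direction we are considering (just a sketch, under the assumption that we step the animation values ourselves each frame, since render(in:) draws the model layer and does not run Core Animation's on-screen compositing): render the layer tree straight into the CVPixelBuffer we hand to NDI, driven by our own timer or CVDisplayLink, so nothing has to be visible on a display.

import AppKit
import CoreVideo

// Sketch: draw a CALayer tree into a 32-bit BGRA/ARGB CVPixelBuffer for one frame.
// Assumes we advance animation state manually per frame before calling this.
func render(layer: CALayer, into pixelBuffer: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }
    guard let context = CGContext(
        data: CVPixelBufferGetBaseAddress(pixelBuffer),
        width: CVPixelBufferGetWidth(pixelBuffer),
        height: CVPixelBufferGetHeight(pixelBuffer),
        bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue) else { return }
    layer.render(in: context)
}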
I have code that captures a window and displays a cropped image. The problem is twofold: ScreenCaptureKit doesn't seem to allow modifying (stopping and recapturing) the stream in window mode to capture only a portion of the screen.
So this means I have to crop and display the cropped image via a published variable. This all works fine, but it seems to stop after some time.
I'm using an M1 with 16 GB of RAM; the program takes less than 100 MB of memory with roughly 40-70% CPU.
I'm printing "captured success" in debug mode, and sometimes the frame isn't valid, so I guard against that.
Any ideas on how to improve my strategy?
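For what it's worth, the next thing I want to try (an assumption on my part that this plays well with a window-based content filter, not something I've confirmed) is asking the stream itself for a sub-region via sourceRect instead of cropping every frame afterwards:

import ScreenCaptureKit

// Sketch: restrict an existing SCStream to a sub-rectangle of its source and a fixed
// output size, without tearing the stream down.
func restrictCapture(of stream: SCStream, to rect: CGRect, outputSize: CGSize) async throws {
    let config = SCStreamConfiguration()
    config.sourceRect = rect                 // region of the source, in points
    config.width = Int(outputSize.width)     // output size in pixels
    config.height = Int(outputSize.height)
    try await stream.updateConfiguration(config)
}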
We are seeing crashes in the Xcode Organizer. So far we have not been able to reproduce them locally. They affect multiple app releases (some older ones built with Xcode 15.x and newer ones built with Xcode 16.0). They only affect iOS 18.5.
Is there anything that changed in the latest iOS? It's hard to tell what exactly is causing this crash, because setting a symbolic breakpoint on CA::Render::Image::new_image(unsigned int, unsigned int, unsigned int, unsigned int, CGColorSpace*, void const*, unsigned long const*, void (*)(void const*, void*), void*) triggers the breakpoint all the time, but not necessarily with the preceding stack frames exactly matching the crash report.
Is it a known issue?
crash.crash
Thank you.