Hello, I bought an iPhone 13 and there is a dot on the display, exactly in the middle of the volume (+) button. It looks as if it might be some kind of feature, but I couldn't figure anything out. I don't know whether it is a defect or not.
I plan to create a simple motion graphics software for macOS that animates text, basic shapes, and handles audio. I'll use SwiftUI for the UI.
What are the commonly used technologies for rendering animated graphics? Core Animation is suitable for UI animations, but not for controlling an animation frame by frame and exporting it.
Basic requirements:
Timeline user interface
Animation of text and basic shapes
Viewer in the SwiftUI GUI with transport controls (play, pause, scrub, …)
Export to video file
Is Metal or Core Graphics typically used directly? I want to keep it as simple as possible.
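For what it's worth, a common pattern for this kind of tool is to render each frame offscreen and hand the finished frames to AVAssetWriter for export, while the SwiftUI viewer simply displays the frame for the current playhead time. Below is a minimal, hedged sketch of the offscreen part using plain Core Graphics; the frame size, timing, and the animated rectangle are placeholders for your own timeline model.
#include <CoreGraphics/CoreGraphics.h>
#include <math.h>

// Hypothetical sketch: render one animation frame at time t into a CGImage.
// The result could be appended to an AVAssetWriter for export, or shown in
// the SwiftUI viewer for the current playhead position.
static CGImageRef CreateFrameImage(size_t width, size_t height, double t) {
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, rgb,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(rgb);
    if (!ctx) return NULL;

    // Background
    CGContextSetRGBFillColor(ctx, 0, 0, 0, 1);
    CGContextFillRect(ctx, CGRectMake(0, 0, width, height));

    // A basic shape animated over time: a square sweeping across the frame,
    // looping every 5 seconds.
    double x = fmod(t, 5.0) / 5.0 * (double)width;
    CGContextSetRGBFillColor(ctx, 1, 0.5, 0, 1);
    CGContextFillRect(ctx, CGRectMake(x, height / 2.0 - 50, 100, 100));

    CGImageRef frame = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return frame; // caller releases
}
Text can be drawn into the same context with Core Text, and Metal mainly becomes worth the extra complexity if you need GPU effects or real-time playback performance that CPU drawing can't provide.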
So I get JPEG data in my app. Previously I was using the higher-level NSBitmapImageRep API and just feeding the JPEG data to it.
But now I've noticed on Sonoma that if I get a JPEG in the CMYK color space, the NSBitmapImageRep renders mostly black and is corrupted. So I'm trying to drop down to the lower-level APIs. Specifically, I grab a CGImageRef and am trying to use the Accelerate API to convert it to another format (to hopefully work around the issue)...
CGImageRef sourceCGImage = CGImageCreateWithJPEGDataProvider(jpegDataProvider,
                                                             NULL,
                                                             shouldInterpolate,
                                                             kCGRenderingIntentDefault);
Now I use vImageConverter_CreateWithCGImageFormat... with the following values for source and destination formats:
Source format: (derived from sourceCGImage)
bitsPerComponent = 8
bitsPerPixel = 32
colorSpace = (kCGColorSpaceICCBased; kCGColorSpaceModelCMYK; Generic CMYK Profile)
bitmapInfo = kCGBitmapByteOrderDefault
version = 0
decode = 0x000060000147f780
renderingIntent = kCGRenderingIntentDefault
Destination format:
bitsPerComponent = 8
bitsPerPixel = 24
colorSpace = (DeviceRGB)
bitmapInfo = 8197
version = 0
decode = 0x0000000000000000
renderingIntent = kCGRenderingIntentDefault
But vImageConverter_CreateWithCGImageFormat fails with kvImageInvalidImageFormat. If I change the destination format to use 32 bitsPerPixel and include alpha in the bitmapInfo, vImageConverter_CreateWithCGImageFormat no longer returns an error, but I get a black image just like with NSBitmapImageRep.
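One detail that stands out: a bitmapInfo of 8197 decodes to kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Little, which describes a 32-bit pixel layout, so pairing it with bitsPerPixel = 24 is itself an inconsistent destination format and would plausibly account for kvImageInvalidImageFormat. For reference, here is a minimal, hedged sketch of the plumbing I would expect around vImageConvert_AnyToAny for a CMYK-to-RGBX conversion; it does not explain the black output, and the sRGB destination color space is my assumption.
#include <Accelerate/Accelerate.h>
#include <stdlib.h>

// Hedged sketch: convert an ICC-based CMYK CGImage to 8-bit RGBX using
// vImageConvert_AnyToAny. Error handling is abbreviated.
static vImage_Error ConvertCMYKToRGBX(CGImageRef sourceCGImage, vImage_Buffer *outBuffer)
{
    vImage_CGImageFormat srcFormat = {
        .bitsPerComponent = 8,
        .bitsPerPixel     = 32,
        .colorSpace       = CGImageGetColorSpace(sourceCGImage),  // the CMYK ICC profile
        .bitmapInfo       = CGImageGetBitmapInfo(sourceCGImage),
        .renderingIntent  = kCGRenderingIntentDefault,
    };

    CGColorSpaceRef srgb = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
    vImage_CGImageFormat dstFormat = {
        .bitsPerComponent = 8,
        .bitsPerPixel     = 32,                                   // RGBX, not 24-bit RGB
        .colorSpace       = srgb,
        .bitmapInfo       = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big,
        .renderingIntent  = kCGRenderingIntentDefault,
    };

    vImage_Error err = kvImageNoError;
    vImageConverterRef converter =
        vImageConverter_CreateWithCGImageFormat(&srcFormat, &dstFormat, NULL,
                                                kvImageNoFlags, &err);
    if (converter) {
        vImage_Buffer srcBuffer;
        err = vImageBuffer_InitWithCGImage(&srcBuffer, &srcFormat, NULL,
                                           sourceCGImage, kvImageNoFlags);
        if (err == kvImageNoError) {
            err = vImageBuffer_Init(outBuffer, CGImageGetHeight(sourceCGImage),
                                    CGImageGetWidth(sourceCGImage), 32, kvImageNoFlags);
            if (err == kvImageNoError)
                err = vImageConvert_AnyToAny(converter, &srcBuffer, outBuffer,
                                             NULL, kvImageNoFlags);
            free(srcBuffer.data);
        }
        vImageConverter_Release(converter);
    }
    CGColorSpaceRelease(srgb);
    return err;
}
If this path still produces black, it may be worth drawing the CGImage directly into a bitmap context first, to separate a JPEG/CMYK decode problem from a conversion problem.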
__builtin_ia32_cvtb2mask512() is the GNU C builtin for vpmovb2m k, zmm.
The Intel intrinsic for it is _mm512_movepi8_mask.
It extracts the most-significant bit from each byte, producing an integer mask.
The SSE2 and AVX2 instructions pmovmskb and vpmovmskb do the same thing for 16 or 32-byte vectors, producing the mask in a GPR instead of an AVX-512 mask register. (_mm_movemask_epi8 and _mm256_movemask_epi8).
I would like an implementation for ARM that is faster than the scalar version below.
I would like an implementation for ARM NEON
I would like an implementation for ARM SVE
I have attached a basic scalar implementation in C. For those trying to implement this on ARM: we care about the high bit of each byte, and in a 128-bit vector each byte's high bit can easily be shifted down to the low bit using the ARM NEON intrinsic vshrq_n_u8(). Note that I would prefer not to store the bitmap to memory; it should just be the return value of the function, as in the scalar version below.
#define _(n) __attribute((vector_size(1<<n),aligned(1)))
typedef char V _(6); // 64 bytes, 512 bits
typedef unsigned long U;
#undef _
U generic_cvtb2mask512(V v) {
  U mask = 0;
  for (int i = 0; i < 64; i++) {
    // shift mask left by 1 and OR in the MSB of byte v[i]
    mask = (mask << 1) | ((v[i] & 0x80) >> 7);
  }
  return mask;
}
This is also a duplicate of: https://stackoverflow.com/questions/79225312
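For comparison, here is a minimal AArch64 NEON sketch (untested and not benchmarked here): shift each byte's MSB down to bit 0 with vshrq_n_u8, weight the lanes with 1, 2, 4, …, 128, horizontally add each 8-byte half with vaddv_u8, then pack the four 16-bit masks into one 64-bit result. Note the bit order follows the x86 movemask convention (byte i goes to bit i), which is the reverse of the scalar loop above.
#include <arm_neon.h>
#include <stdint.h>

// Movemask for one 16-byte vector: byte i's MSB ends up in bit i.
static inline uint64_t neon_movemask_u8(uint8x16_t v) {
    const uint8x16_t weights = { 1, 2, 4, 8, 16, 32, 64, 128,
                                 1, 2, 4, 8, 16, 32, 64, 128 };
    uint8x16_t msb  = vshrq_n_u8(v, 7);        // 0 or 1 in every byte
    uint8x16_t bits = vmulq_u8(msb, weights);  // isolate each lane's bit weight
    uint64_t lo = vaddv_u8(vget_low_u8(bits)); // horizontal add of the low 8 lanes
    uint64_t hi = vaddv_u8(vget_high_u8(bits));
    return lo | (hi << 8);
}

// 64-byte version: the input arrives in four NEON registers and the result is
// returned in a GPR, with no stores to memory.
uint64_t neon_cvtb2mask512(uint8x16x4_t v) {
    return  neon_movemask_u8(v.val[0])
         | (neon_movemask_u8(v.val[1]) << 16)
         | (neon_movemask_u8(v.val[2]) << 32)
         | (neon_movemask_u8(v.val[3]) << 48);
}
A caller can build the uint8x16x4_t with vld1q_u8_x4 where the compiler supports it, or with four separate vld1q_u8 loads.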
Hi,
I wanted to test whether it's possible to use Mesa3D OGLon12 + D3DMetal 2b3 to get GL > 4.1 support in Windows apps via D3D12Metal,
using the simple wglgears.c app (similar to glxgears) and running it like:
GALLIUM_DRIVER=d3d12 wine64 wglgears64 -info
with opengl32.dll overridden using the contents of:
https://github.com/pal1000/mesa-dist-win/releases/download/24.3.0-rc1/mesa3d-24.3.0-rc1-release-msvc.7z
I get:
[D3DMetal:LOG:5E53] Unsupported API: CreateCommandQueue1
caused by:
https://gitlab.freedesktop.org/mesa/mesa/-/commit/c022c9603d500b59ff5e6f93c8a214d1785ab20a
API:
https://learn.microsoft.com/en-us/windows/win32/api/d3d12/nf-d3d12-id3d12device9-createcommandqueue1
Note that the setup is otherwise correct, since using:
GALLIUM_DRIVER=llvmpipe wine64 wglgears64 -info
I get:
GL_RENDERER = llvmpipe (LLVM 19.1.3, 128 bits)
GL_VERSION = 4.5 (Compatibility Profile) Mesa 24.3.0-rc1 (git-85ba713d76)
GL_VENDOR = Mesa
GL_EXTENSIONS = GL_ARB_multisample GL_EXT_abgr GL_EXT_bgra GL_EXT_blend_color GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_texture… etc.
I’m trying to build my project using Unreal Engine 5.4 for iOS. I use Socket.IO to connect to the backend. The SocketIOClient plugin, version 2.8.0, is used for this.
When trying to connect to the socket, a crash occurs. This only happens on iOS, only in Distribution builds, and only on Unreal 5.4. There are no problems on Unreal 5.2. Callstack:
crashlog.crash
The callstack may be slightly different, but the problem is always when allocating or deallocating memory. I suspect that this may be a race condition and is related to some peculiarities of working with memory on iOS. There are also several similar issues in the plugin repository, for example this one.
I tried other versions of the plugin and other versions of Xcode, and tried building Socket.IO with both C++20 and C++17, but nothing helps. Does anyone know what can be done about this?
Dear Apple Support Team,
I recently purchased an iPad Pro 2022 and updated it to iOS 18.2. However, I am experiencing an issue while using Call of Duty Mobile. The Game Mode activates randomly and sometimes does not activate at all. Additionally, when the Game Mode is on, the game crashes unexpectedly, causing an unstable experience. I kindly request that you address this issue in upcoming iOS updates.
Thank you for your attention and support.
Best regards,
[samadBg]
I want to implement the ability to apply a Lightroom preset (.xmp file) to an image in my app, but I am running into difficulties. How can I configure things like color grading, curves, etc. in Swift?
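There is no built-in .xmp importer, but once the preset's values are parsed, many of the adjustments map onto Core Image filters. As a hedged illustration (shown in Objective-C; the same filters are available from Swift), a tone curve plus basic color controls might be applied like this, with the numeric values standing in for whatever the XMP file specifies:
#import <CoreImage/CoreImage.h>

// Hypothetical sketch: apply a tone curve and saturation/contrast adjustments
// parsed from a preset to a CIImage. The values here are placeholders.
static CIImage *ApplyPresetAdjustments(CIImage *input) {
    CIFilter *curve = [CIFilter filterWithName:@"CIToneCurve"];
    [curve setValue:input forKey:kCIInputImageKey];
    [curve setValue:[CIVector vectorWithX:0.0  Y:0.02] forKey:@"inputPoint0"];
    [curve setValue:[CIVector vectorWithX:0.25 Y:0.22] forKey:@"inputPoint1"];
    [curve setValue:[CIVector vectorWithX:0.5  Y:0.50] forKey:@"inputPoint2"];
    [curve setValue:[CIVector vectorWithX:0.75 Y:0.78] forKey:@"inputPoint3"];
    [curve setValue:[CIVector vectorWithX:1.0  Y:0.98] forKey:@"inputPoint4"];

    CIFilter *color = [CIFilter filterWithName:@"CIColorControls"];
    [color setValue:curve.outputImage forKey:kCIInputImageKey];
    [color setValue:@(1.10) forKey:kCIInputSaturationKey]; // from the preset
    [color setValue:@(1.05) forKey:kCIInputContrastKey];   // from the preset
    return color.outputImage;
}
Full Lightroom-style color grading (per-range hue and saturation shifts) doesn't correspond to a single built-in filter; a CIColorCube lookup table or a custom CIKernel is the usual escape hatch for that part.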
I am trying to install the game-porting-toolkit using
brew -v install apple/apple/game-porting-toolkit
but this fails each time because of a dependency on a deprecated openssl version:
Fetching dependencies for apple/apple/game-porting-toolkit: cmake, ninja, apple/apple/game-porting-toolkit-compiler, openssl1.1
...
...
Error: openssl@1.1 has been disabled because it is not supported upstream! It was disabled on 2024-10-24.
Is there a way to override this dependency or use a newer version of openssl for the check?
Hi guys! Is there any way to get a frame at a certain time? I'm writing a plug-in and want to use the 2 frames before and 2 frames after the current frame in order to render the final image.
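If the plug-in has access to the source movie itself (rather than only receiving frames from a host application), one hedged option is AVAssetImageGenerator with zero time tolerance; a rough Objective-C sketch:
#import <AVFoundation/AVFoundation.h>

// Hedged sketch: fetch the frame at a specific time from an AVAsset.
// Only applicable if the plug-in can open the source media directly.
static CGImageRef CopyFrameAtTime(AVAsset *asset, CMTime time) {
    AVAssetImageGenerator *gen = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    gen.requestedTimeToleranceBefore = kCMTimeZero; // exact frame, no snapping
    gen.requestedTimeToleranceAfter  = kCMTimeZero;
    NSError *error = nil;
    CGImageRef image = [gen copyCGImageAtTime:time actualTime:NULL error:&error];
    return image; // caller releases; NULL on failure
}
For the neighboring frames, offset the requested time by the frame duration (CMTimeSubtract / CMTimeAdd). If the frames are supplied by a host's plug-in API instead, the host normally has its own mechanism for requesting frames at other times, so this is only a fallback.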
Sample project from: https://developer.apple.com/documentation/RealityKit/guided-capture-sample was fine with beta 3.
In beta 4, getting these errors:
Generic struct 'ObservedObject' requires that 'ObjectCaptureSession' conform to 'ObservableObject'
Does anyone have a fix?
Thanks
I tried using the GameController APIs for this, but they didn't seem to work. Is that the recommended API for handling keyboard/mouse? The notifications for mouse and keyboard connect/disconnect don't seem to be defined for visionOS.
visionOS 2.0 touts keyboard and mouse support. The Simulator can even forward keyboard/mouse events to the app. But there doesn't seem to be any sample code showing how to programmatically receive either of these. The game controller works fine (on device, not in the Simulator).
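For reference, the GameController route I would expect to use looks roughly like the sketch below (Objective-C); whether these handlers actually fire on visionOS 2.0 is exactly the open question here.
#import <GameController/GameController.h>

// Hedged sketch: observe keyboard and mouse via the GameController framework.
static void ObserveKeyboardAndMouse(void) {
    [[NSNotificationCenter defaultCenter]
        addObserverForName:GCKeyboardDidConnectNotification
                    object:nil
                     queue:NSOperationQueue.mainQueue
                usingBlock:^(NSNotification *note) {
        GCKeyboard *keyboard = note.object;
        keyboard.keyboardInput.keyChangedHandler =
            ^(GCKeyboardInput *input, GCControllerButtonInput *key,
              GCKeyCode keyCode, BOOL pressed) {
            NSLog(@"key %ld %@", (long)keyCode, pressed ? @"down" : @"up");
        };
    }];

    [[NSNotificationCenter defaultCenter]
        addObserverForName:GCMouseDidConnectNotification
                    object:nil
                     queue:NSOperationQueue.mainQueue
                usingBlock:^(NSNotification *note) {
        GCMouse *mouse = note.object;
        mouse.mouseInput.mouseMovedHandler =
            ^(GCMouseInput *input, float deltaX, float deltaY) {
            NSLog(@"mouse moved %f %f", deltaX, deltaY);
        };
    }];
}
GCKeyboard.coalescedKeyboard and GCMouse.current can also be checked directly, in case the device connected before the observer was registered.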
I have this game controller connected to my M1, and the Simulator won't announce it via .GCControllerDidConnect. This works fine on iOS and macOS.
I have the Simulator set to "Send Game Controller to Device", which the Simulator does. If I disable that, then I can control the simulator view. But once it's enabled, the Simulator doesn't tell the app about the controller.
I want to know why there is no video frame data when ReplayKit enters the background and then returns to the foreground.
For some reason I can't disable the Graphics HUD.
Not really a problem for development, but it's also showing in TestFlight apps, for example when swiping down on the keyboard, but also in some other places.
Of course I tried disabling the toggle, but even when it's off the HUD is still showing. Even completely disabling Developer mode does not work.
Is this a known issue?
I already scrolled through possibly every Google search result but I can't figure out how to solve this.
Guys,
In my main application bundle, I have included a helper bundle in its Resources. When the helper requests Accessibility permission, the system modal window displays what the helper is requesting permission for.
However, when the helper requests permission for Screen Recording, the system modal window displays that the main application bundle is requesting permission, which includes the helper.
This issue seems to be specific to Ventura, as both requests are displayed on behalf of the helper in Monterey.
I'm wondering if this is a known issue or limitation or if there is a way to make the permission request specifically from the helper.
Hi,
When using a High Definition Display, is there a way to render at exactly the target resolution on the physical screen?
My understanding is that the default behavior is to render to a backing store with a resolution (in pixels) which can be twice the size of the logical resolution (in points). Then we let the OS handle the down-scaling to the actual target resolution on the screen. This is all nice for non-graphics intensive apps, but it means that my game will render at a higher resolution than needed, which seems like an obvious loss of performance.
My expectation is that, for graphics-intensive applications such as games, we should be able to query and render at the final resolution of the display. Can it / should it be done?
Thank you for your help :)
FYI, I did find a document which explains how to set up your CAMetalLayer to render at a custom resolution. I suspect that this may be what I have to do?
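Yes, that is essentially the approach: drive the CAMetalLayer's drawableSize yourself instead of letting it follow bounds × contentsScale, and let the compositor scale the drawable to the layer's on-screen bounds. A hedged Objective-C sketch, using the current display mode to find its pixel dimensions:
#import <AppKit/AppKit.h>
#import <QuartzCore/CAMetalLayer.h>

// Hedged sketch: size a CAMetalLayer's drawable to the pixel resolution of the
// screen's current display mode (or any other resolution you want to render at),
// instead of the default backing-store size of points * backingScaleFactor.
static void ConfigureDrawableSize(CAMetalLayer *layer, NSWindow *window) {
    NSScreen *screen = window.screen ?: NSScreen.mainScreen;
    CGDirectDisplayID displayID =
        [screen.deviceDescription[@"NSScreenNumber"] unsignedIntValue];

    CGDisplayModeRef mode = CGDisplayCopyDisplayMode(displayID);
    size_t pixelWidth  = CGDisplayModeGetPixelWidth(mode);   // pixels of the current mode
    size_t pixelHeight = CGDisplayModeGetPixelHeight(mode);
    CGDisplayModeRelease(mode);

    // For a full-screen game this renders 1:1 with the mode's pixel grid; for a
    // windowed game, scale the window's size in points by the same ratio instead.
    layer.drawableSize = CGSizeMake(pixelWidth, pixelHeight);
}
Note that a scaled "more space" mode can still have a backing pixel size larger than the panel's native resolution, so querying the mode (rather than assuming 2x) is what keeps the render target size honest.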
Our application uses Core Image to apply custom CIFilters to still images and video. I'm running into issues when the supplied image is large enough (>4096) that the image is automatically tiled. The simplest of these to describe is a filter that performs various mirroring effects - backwards, upside-down etc.
The implementation portion of the filter provides a sampler (src) and passes this into the kernel with an roiCallback that uses the destRect, inset by -1 in both dimensions:
return [mirrorsKernel applyWithExtent:[src extent]
                          roiCallback:^CGRect(int index, CGRect destRect) {
                              return CGRectInset(destRect, -1, -1);
                          }
                            arguments:@[src]];
The kernel is very simple, sampling from the X coordinate equal to the src width - current coordinate:
float4 backwards(sampler image, destination dest)
{
    float2 dc = dest.coord();
    dc.x = image.size().x - dc.x;
    return image.sample(image.transform(dc));
}
When this runs on an image that is wider than 4096, tiling happens, with the result being that destRect is not the entire image and therefore the resulting output image is incorrect. If the ROI uses [src extent] instead of destRect, the result is correct, but this will lead to serious performance issues when src gets too large.
All of this makes sense to me. What I'd like to know is if there is a way to handle this filter's requirements for sampling from the entire source while still limiting the ROI to maintain performance? I think the answer is probably no within our current structure and performance limits. But I wanted to see if there's anything we're missing.
I am aware that the simple kernel above can be replaced with an affine transform, which is an option for backwards and upside-down mirroring. We have other kernels in this filter that perform mirroring of either half of the source image or one quadrant of the source image. In these cases, I suppose it might be possible (up to a point) to create a custom ROI that is only the portion of the source that is being mirrored. We have not attempted that yet.
Any thoughts/input appreciated, thanks!
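For the "backwards" kernel specifically, one tiling-safe option may be an roiCallback that mirrors destRect across the extent's vertical midline, so each destination tile only requests the slice of the source it actually samples. A hedged sketch, reusing src and mirrorsKernel from the snippet above:
// Hedged sketch: ROI for the horizontal-mirror kernel. The kernel samples
// x' = width - x, so the source region needed for a destination tile is that
// tile reflected across the extent's vertical midline.
CGRect extent = [src extent];
return [mirrorsKernel applyWithExtent:extent
                          roiCallback:^CGRect(int index, CGRect destRect) {
                              CGFloat mirroredMinX = CGRectGetMinX(extent) +
                                  (CGRectGetMaxX(extent) - CGRectGetMaxX(destRect));
                              CGRect roi = CGRectMake(mirroredMinX, destRect.origin.y,
                                                      destRect.size.width,
                                                      destRect.size.height);
                              return CGRectInset(roi, -1, -1); // keep the 1px guard band
                          }
                            arguments:@[src]];
The half-image and quadrant mirrors would need the same treatment per mirrored region, which matches the custom-ROI idea mentioned above.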
We've recently updated a view which displays photos via a CoreImage chain from a NSOpenGLView subclass to a NSView with a backing CAMetalLayer.
Things are mostly working fine, but we occasionally hit a deadlock involving CALayer and CIMetalCommandQueue. I've made a spindump, it appears none of our code is involved in the locked threads. Despite this, I'm assuming the problem is ours 😅
I saw the mention in the CAMetalLayer documentation about releasing drawables with an @autoreleasepool in drawRect, we have done this and I can't find any places we're retaining a drawable outside drawRect.
https://developer.apple.com/documentation/quartzcore/cametallayer?language=objc
I am seeing this on macOS 15.0.1 on an M2 Max MacBook Pro. We haven't seen it on macOS 14.x, but that may just be luck, as we have not tested much on that OS.
I don't know how to move forward debugging this, any help much appreciated!
The two locking threads in the spindump are MainThread and CI::RenderCompletionQueue.
Thread 0xb3b0f8 DispatchQueue "com.apple.main-thread"(1)
…
CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 364 (QuartzCore + 178484) [0x1a5dba934]
invocation function for block in CA::Context::commit_transaction(CA::Transaction*, double, double*) + 176 (QuartzCore + 1782676) [0x1a5f42394]
-[CALayer(CALayerPrivate) _copyRenderLayer:layerFlags:commitFlags:] + 720 (QuartzCore + 179304) [0x1a5dbac68]
-[NSImage(CALayerSupport) CA_copyRenderValue] + 52 (AppKit + 1517960) [0x1a0fe0988]
-[NSImage CGImageForProposedRect:context:hints:] + 440 (AppKit + 1246368) [0x1a0f9e4a0]
-[NSImage _usingBestRepresentationForRect:context:hints:body:] + 148 (AppKit + 1247980) [0x1a0f9eaec]
__48-[NSImage CGImageForProposedRect:context:hints:]_block_invoke + 80 (AppKit + 1248792) [0x1a0f9ee18]
-[NSCIImageRep CGImageForProposedRect:context:hints:] + 112 (AppKit + 6200292) [0x1a1457be4]
+[CIContext contextWithOptions:] + 40 (CoreImage + 549532) [0x1a8df129c]
-[CIContext initWithOptions:] + 588 (CoreImage + 65744) [0x1a8d7b0d0]
+[CIContext(Internal) internalContextWithMTLDevice:options:] + 76 (CoreImage + 66568) [0x1a8d7b408]
CIMetalCommandQueueCreate + 52 (CoreImage + 66692) [0x1a8d7b484]
-[CaptureMTLDevice newCommandQueue] + 168 (GPUToolsCapture + 130200) [0x1029e7c98]
-[CaptureMTLCommandQueue initWithBaseObject:captureDevice:] + 204 (GPUToolsCapture + 799812) [0x102a8b444]
GTMTLGuestAppClientAddMTLCommandQueueInfo + 108 (GPUToolsCapture + 313572) [0x102a148e4]
__ulock_wait2 + 8 (libsystem_kernel.dylib + 60540) [0x19d24bc7c]
*??? (kernel.release.t6020 + 6102048) [0xfffffe0008cd5c20] (blocked by turnstile waiting for Phocus [11343] [unique pid 1001657] thread 0xb41b08 - part of a deadlock)
and
Thread 0xb41b08 DispatchQueue "CI::RenderCompletionQueue"(535) 1000 samples (1-1000) priority 46 (base 46)
start_wqthread + 8 (libsystem_pthread.dylib + 52464) [0x1035f4cf0]
_pthread_wqthread + 288 (libsystem_pthread.dylib + 20736) [0x1035ed100]
_dispatch_workloop_worker_thread + 580 (libdispatch.dylib + 129956) [0x1026afba4]
_dispatch_root_queue_drain_deferred_wlh + 652 (libdispatch.dylib + 133360) [0x1026b08f0]
_dispatch_lane_invoke + 468 (libdispatch.dylib + 68516) [0x1026a0ba4]
_dispatch_lane_serial_drain + 860 (libdispatch.dylib + 64160) [0x10269faa0]
_dispatch_client_callout + 20 (libdispatch.dylib + 26788) [0x1026968a4]
_dispatch_call_block_and_release + 32 (libdispatch.dylib + 19300) [0x102694b64]
CI::Object::unref() const + 120 (CoreImage + 35360) [0x1a8d73a20]
CI::MetalContext::~MetalContext() + 16 (CoreImage + 192260) [0x1a8d99f04]
CI::MetalContext::~MetalContext() + 236 (CoreImage + 192536) [0x1a8d9a018]
-[CaptureMTLCommandQueue dealloc] + 44 (GPUToolsCapture + 797916) [0x102a8acdc]
GTMTLGuestAppClientRemoveMTLCommandQueueInfo + 236 (GPUToolsCapture + 314240) [0x102a14b80]
GTMTLGuestAppClient_allCaptureObjectsUnsafe + 392 (GPUToolsCapture + 298776) [0x102a10f18]
AllMetalLayers + 64 (GPUToolsCapture + 518224) [0x102a46850]
MakeLayerInfos + 320 (GPUToolsCapture + 518608) [0x102a469d0]
-[CALayer frame] + 88 (QuartzCore + 74624) [0x1a5da1380]
__ulock_wait2 + 8 (libsystem_kernel.dylib + 60540) [0x19d24bc7c]
*??? (kernel.release.t6020 + 6102048) [0xfffffe0008cd5c20] (blocked by turnstile waiting for Phocus [11343] [unique pid 1001657] thread 0xb3b0f8 - part of a deadlock)
We have a pixel buffer pool managed by the system (created using the CVPixelBufferPoolCreate API). Each time we need a pixel buffer, we call CVPixelBufferPoolCreatePixelBuffer to create one from the pool. Then we overwrite all pixels of the buffer, get the IOSurface from the buffer, and set the IOSurface as a CALayer's contents property in another process to show it. Everything works fine.
Now we want to do some optimization by only overwriting the pixels that changed between frames. The way we'd like to do this: after we call CVPixelBufferPoolCreatePixelBuffer to create a buffer, we get the underlying IOSurface ID and map it to frame info. Next time, if we get the same IOSurface ID, we just compare the current frame info with the one we stored and only update the changed pixels in the CVPixelBuffer.
However, there is no documentation stating whether a CVPixelBuffer created using CVPixelBufferPoolCreatePixelBuffer will contain the previous pixels (its content from before it was returned to the pool). Do we have this guarantee? If not, is there any way we can know whether the created buffer contains the previous pixels or not?
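As far as I know there is no documented guarantee either way that a buffer recycled from a CVPixelBufferPool still holds the pixels that were last written into it. A defensive version of the IOSurface-ID scheme described above is to treat any surface ID this process has not written before as uninitialized and do a full overwrite in that case. A rough Objective-C sketch; WriteAllPixels and WriteChangedPixels are placeholders for your existing drawing code:
#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>
#import <IOSurface/IOSurface.h>

// Placeholders for your existing full-frame / partial-update drawing routines.
static void WriteAllPixels(CVPixelBufferRef buffer, uint64_t frameNumber);
static void WriteChangedPixels(CVPixelBufferRef buffer, uint64_t fromFrame, uint64_t toFrame);

// Hedged sketch: decide between a full write and a dirty-region write based on
// whether this process has already filled the recycled buffer's IOSurface.
static void FillBuffer(CVPixelBufferRef buffer, uint64_t frameNumber,
                       NSMutableDictionary<NSNumber *, NSNumber *> *writtenFrames) {
    IOSurfaceRef surface = CVPixelBufferGetIOSurface(buffer);
    if (surface == NULL) {                 // not IOSurface-backed: play it safe
        WriteAllPixels(buffer, frameNumber);
        return;
    }
    NSNumber *key = @(IOSurfaceGetID(surface));
    NSNumber *lastWritten = writtenFrames[key];
    if (lastWritten == nil) {
        // First time we've seen this surface: assume its contents are undefined.
        WriteAllPixels(buffer, frameNumber);
    } else {
        // We know what we last wrote here; update only the pixels that changed.
        WriteChangedPixels(buffer, lastWritten.unsignedLongLongValue, frameNumber);
    }
    writtenFrames[key] = @(frameNumber);
}
This keeps the partial-update optimization for buffers the pool actually recycles, while newly allocated surfaces (or a flushed pool) automatically fall back to the full write.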