Does anyone know if it's possible to control WHEN the video captureOutput callback collects the camera image? I'd like to synchronize multiple iOS cameras to sub-millisecond precision for an application that captures fast action; the images would have to be synchronized to within a tolerance of under 1 ms. Think of it as a network-based genlock signal. Thanks,
I'm trying to figure out whether Metal compute is a viable solution for an application. I'm finding that even on an iPhone 6s Plus with an A9, an empty compute encoder executed in a loop never beats 2.5 ms per iteration. The simplest test I could concoct was:

- (void)runtest {
    id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
    id<MTLComputeCommandEncoder> computeEncoder = [commandBuffer computeCommandEncoder];
    [computeEncoder endEncoding];
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> _Nonnull buffer) {
        runcounter++;
        NSLog(@"count: %d", runcounter);
        [self runtest];
    }];
    [commandBuffer commit];
}

The loop ran for 10 seconds and I did a simple division of runs/seconds; no other work was being done by the app (display loop, etc.). The result was 2.5 ms between iterations. For comparison, something like a NEON sum of 1024 numbers averaged 0.04 ms and of course executed immediately. I realize this doesn't mean Metal wastes 2.5 ms of resources (it could just be scheduling), but for very low-latency requirements (camera processing) it does mean that NEON can process immediately while Metal cannot. Can someone confirm this finding or correct the test? Thanks.
Does anyone have an example of how to perform an optimized write to an MTLTexture that is private and internally tiled for performance? A simple fragment shader that writes RGBA8 to a full-screen texture may take 800 µs or so. I have a compute function that does the same into a buffer in about 150 µs using its own memory-performant tiling (same overall memory size). I was hoping there would be a way to write in a tile-friendly way directly to the texture (if the internal tile size, etc. are known). Thanks,
I've been looking hard and can't find an example of using fragment/tile imageblocks WITHOUT writing to the framebuffer color attachments. The Apple demo (***** lights) seems to write to the color attachments for some reason, introducing the very performance penalty that imageblocks seem to have been designed to overcome (the pixel stores have been verified visually and in the debugger). I've found no documentation that explicitly shows how to pass data through the pipe using imageblock memory only. Does anyone have a link or sample that explicitly demonstrates going from vertex -> fragment -> compute without ever writing to an attachment? This is for a very specific processing pipeline that is purpose-designed for speed and should not write pixels. Thanks,
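For reference, a tile stage that operates purely on imageblock memory looks roughly like the following in Metal Shading Language. This is an untested sketch from my reading of the MSL specification; the struct layout is an assumption, and whether the final store to the attachment can be fully elided is exactly the open question:

```
// MSL sketch (untested): a tile kernel working on imageblock data
// resident in tile memory, with no explicit device-memory write.
struct TileData {
    half4 value;
};

kernel void process_tile(
    imageblock<TileData, imageblock_layout_implicit> blockData,
    ushort2 tid [[thread_position_in_threadgroup]])
{
    threadgroup_imageblock TileData *d = blockData.data(tid);
    d->value *= 2.0h;   // in-tile processing only
}
```

On the API side, marking the color attachment's loadAction/storeAction as dontCare should in principle avoid the load and store to device memory between passes in the same render pass; whether that actually removes the penalty seen in the demo is what I'd want a working sample to confirm.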
So I cloned 4 volumes into a new 1 TB APFS container of 4 APFS volumes: a terabyte's worth of data onto a brand-new external SSD. After unmounting and remounting the drive: NO DISKS. diskutil listed the container as in ERROR, and then Disk Utility automagically formatted a new single volume out of it. I trusted a little too eagerly that the claim of hundreds of millions of devices tested carried some weight. Unfortunately, diskutil and a lack of documentation left few options but to reformat back to trusty old HFS+. Any reason in particular ZFS was judged so flawed it couldn't be relied upon? It brings existing tools, deduplication, and data checksumming. That whole container thing just collapsed on me with a single error message, and Recovery repair did nothing to illuminate or help.
I'd love to be shown that I'm in error, but I've been following the Broadcast Extension since beta 1 (now at beta 4), and creating a Broadcast Upload and Broadcast UI extension from the templates has resulted in zero success. Can someone please prove me wrong? I can't imagine I'm the only one seeing this. Installing an app that has these extensions will, AFTER A REBOOT, produce the named Upload extension in the Screen Recording dialogue; however, it never presents a Setup UI, nor can the sample handler be debugged. Again, to anyone who has gotten this to work: I'd love to see a working case. I've tried on all betas, and on both iPad and iPhone devices. Thanks,