Posts

Post not yet marked as solved
3 Replies
466 Views
I need to deal with very wide images, beyond the 16384-pixel width limit that Metal has, so I've resorted to using Metal buffers to reorganize my pixel data. Even using an MTLTexture for output was failing, due to purported issues with getBytes and texture synchronization on NVIDIA hardware.

Anyway, below is a code snippet that works just fine for converting 32-bit RGBA data (with forced alpha) into my desired compact form. Both my input and output buffers are formatted for 32-bit RGBA data; my output buffer is actually a CVPixelBuffer.

If I change the noted line to deal with 24-bit BGR data instead, multiplying by 3 bytes/pixel instead of 4, all I get is a black image. I'm baffled as to why things are failing.

```metal
kernel void stripe_Kernel(device const uchar  *inBuffer  [[ buffer(0) ]],
                          device uchar4       *outBuffer [[ buffer(1) ]],
                          device const ushort *imgWidth  [[ buffer(2) ]],
                          device const ushort *imgHeight [[ buffer(3) ]],
                          device const ushort *packWidth [[ buffer(4) ]],
                          uint2 gid [[ thread_position_in_grid ]])
{
    const ushort imgW  = imgWidth[0];   // e.g. 18000+
    const ushort imgH  = imgHeight[0];  // e.g. 2048
    const ushort packW = packWidth[0];  // e.g. 1024

    uint32_t posX = gid.x;
    uint32_t posY = gid.y;

    uint32_t sourceX = ((int)(posY/imgH)*packW + posX) % imgW;
    uint32_t sourceY = (int)(posY%imgH);

    uint32_t ptr = (sourceY*imgW + sourceX)*4;   // change this to "*3" for 24-bit
    uchar4 pixel = uchar4(inBuffer[ptr], inBuffer[ptr+1], inBuffer[ptr+2], 255);
    outBuffer[posY*packW + posX] = pixel;
}
```

I should mention that I allocate the input buffer thusly:

```objc
posix_memalign((void *)&diskFrame, 0x4000, imgHeight*imgWidth*4);
```

I've even left this as-is when dealing with 24-bit data, thinking I may be having memory-alignment issues on the GPU. I was previously using the Accelerate framework to convert my 24-bit source data to 32-bit source data (inBuffer) for use by Metal, but doing this conversion on the GPU should anecdotally be about 3x faster.
This code could be much shorter if, say, inBuffer were also defined as uchar4 or uint32_t, but I'm demonstrating a failure case.
Posted by B_Payan.
Post not yet marked as solved
0 Replies
209 Views
The legacy NSCollectionView grid has some really simple behaviour when dragging items around in the grid: when dragging item(s) to a target location, the adjacent items would animate slightly apart ("parting the Red Sea", as it were) to show the proposed drop. A bonus: there wasn't a vertical insertion bar. I would assume that behind the scenes, the legacy implementation used the NSSpringLoadingDestination protocol and the view's animator instance to make this happen. I managed to partly mimic this behaviour with the new NSCollectionViewGridLayout (via the aforementioned protocol/animator) and it's almost working, but I'm giving up on my approach because I can't see a means of suppressing the blue vertical insertion bar drawn right through the center of my items. That happens during my drag session when I propose an NSCollectionViewDropBefore operation (in my validate-drop method) while returning anything but NSDragOperationNone. I do see that there's a layoutAttributesForInterItemGapBeforeIndexPath method on the new layouts; that might work, but I don't see any means of avoiding the insertion-bar highlight. Does anyone know how to easily get this legacy behaviour with the post-OS X 10.11 NSCollectionView API?
Posted by B_Payan.
Post not yet marked as solved
0 Replies
282 Views
I have an NSCollectionView subclass specified as both its own dataSource and delegate. I have two issues:

1) Rather than using the registerClass method, attempting to instead use the three lines of commented code below (with a non-nil protoNib) to register with the NSCollectionView causes "theItem" to always be nil.

2) Using the class-registry option, all works mostly fine. But if I remove the willDisplayItem and didEndDisplayingItem stubs, the system eats up gobs of memory on its first call to itemForRepresentedObjectAtIndexPath (with thousands of internal calls to these two stubs) and eventually crashes. Instruments shows *thousands* of 4k @autoreleasepool content items being created by AppKit.

Any idea why this might be happening?

```objc
- (void)awakeFromNib
{
    [self registerClass:[MECollectionViewItem class] forItemWithIdentifier:@"EntityItem"];
//  NSString *nibName = NSStringFromClass([MECollectionViewItem class]);
//  NSNib *protoNib = [[NSNib alloc] initWithNibNamed:nibName bundle:nil];
//  [self registerNib:protoNib forItemWithIdentifier:@"EntityItem"];

    __weak typeof(self) weakSelf = self;
    [self setDelegate:weakSelf];
    [self setDataSource:weakSelf];
    ...
}

- (MECollectionViewItem *)collectionView:(NSCollectionView *)collectionView
     itemForRepresentedObjectAtIndexPath:(NSIndexPath *)indexPath
{
    MECollectionViewItem *theItem = [self makeItemWithIdentifier:@"EntityItem"
                                                    forIndexPath:indexPath];
    return theItem;
}

- (void)collectionView:(NSCollectionView *)collectionView
       willDisplayItem:(NSCollectionViewItem *)item
forRepresentedObjectAtIndexPath:(NSIndexPath *)indexPath
{
}

- (void)collectionView:(NSCollectionView *)collectionView
  didEndDisplayingItem:(nonnull NSCollectionViewItem *)item
forRepresentedObjectAtIndexPath:(nonnull NSIndexPath *)indexPath
{
}
```
Posted by B_Payan.
Post not yet marked as solved
0 Replies
205 Views
I have a PHP script running on a website which logs the number of times access is associated with a key:

http://www.mywebsite.com/admin-query.php?key=8469f8847b8b4d05364ae400d4071d93

If the URL is incomplete and I go to manually alter the URL in the address bar of Safari 13.0.5 (on macOS 10.14.6), then as soon as the URL matches what was previously entered, before I even press Return, I notice that the script has been run and the access tally associated with that key in my MySQL database has been incremented.

To be clear: altering "admin-query" in the URL to "admin-query1" and then back again, without pressing Return, causes the "Top Hit" and "Google Suggestions" popup to appear, at which point the PHP script is run. How can that be so?

I've tested with Chrome 81.0.4044.122 and didn't see the same result.
Posted by B_Payan.
Post not yet marked as solved
0 Replies
224 Views
I'm curious to know if anyone is aware of a developing standard for including real-time DMX lighting-control data (Art-Net/E1.31) as a track in an existing QuickTime movie.

At the AVFoundation level, rather than just pulling audio/video samples from a running AVPlayer, one would be pulling this control metadata and sending it across the DMX network. One would likely use AVMutableMovie to effectively merge a basic audio/video-tracked QTMovie with timed metadata. In actuality, it would be a sample-reference file linking to the AV-based movie file, and it would itself contain the timed metadata.

I am aware of Tim Monroe's "Editing Movies in AVFoundation" (Session 506, WWDC 2015), where he melded the GPS location data from his years of inline skating around Oakland & Boston with his headcam videos. I've downloaded the "AVMovieEditor" sample code (circa 2015) mentioned therein, but it's more than a tad dated. As well, there's "Harnessing Metadata in Audiovisual Media" (WWDC 2014), with the AVCaptureLocation and AVTimedAnnotationWriter sample code.

I don't see an existing AVMediaType that would be a good fit for this lighting-control data. I guess one can just invent their own data type, no?

-- Bruce.
Posted by B_Payan.
Post not yet marked as solved
2 Replies
576 Views
I'm attempting to obtain the progress on a QTMovieModernizer instance. Below is my non-functional code snippet, which returns TRUE upon success; I do run this on a background thread to begin with. I'm unclear as to what my issue is, since "totalUnits" and "doneUnits" are always zero. I should presumably (somehow) be attaching the NSProgress instance to the mystery thread that the modernizer is running on, but I can't see how to do so. Any help would be appreciated.

```objc
QTMovieModernizer *modernizer = [[QTMovieModernizer alloc] initWithSourceURL:inputURL
                                                              destinationURL:outputURL];
__block BOOL doneModernizing = NO;
__block NSError *modernizerError = nil;

[modernizer modernizeWithCompletionHandler:^{
    modernizerError = [self->modernizer error];
    doneModernizing = YES;
}];

NSProgress *curProgress = [NSProgress currentProgress];
while (!doneModernizing) {
    usleep(999999);
    int64_t totalUnits = [curProgress totalUnitCount];
    int64_t doneUnits  = [curProgress completedUnitCount];
    if (totalUnits > 0)
        NSLog(@"Total work done: %2.2f %%", (double)doneUnits*100/(double)totalUnits);
}

if (modernizerError != nil) {
    NSLog(@"Modernizer error %@", modernizerError.description);
    return NO;
}
return YES;
```
Posted by B_Payan.
Post not yet marked as solved
0 Replies
799 Views
I'm trying to render to an MTLTexture within an MTKView and then pull the entire region back off the GPU, such that clear pixel values of RGBA = 0x00000000 really come off the GPU as just that, in those regions where I haven't drawn anything. Instead, all un-rendered areas come off as bright red, which is typical of how Metal renders on-screen areas that have otherwise not been touched by a vertex/fragment shader.

However, I've narrowed it down to this test case, in which I don't actually render anything and just send a clear texture on a round trip to the GPU and back:

1. I create my own texture ('clearTexture') with 0x00000000 pixel values
2. This texture ends up on the GPU (MTLStorageModeManaged)
3. I then pull that texture off the GPU ('testTexture')
4. I grab that texture's pixel bytes

What do I notice? Pixel values of 0x0000ffff (pure red). I'm baffled. Any help would be appreciated (and Happy New Year!).

Here's the setup:

```objc
CGSize textureSize = CGSizeMake(1920,1080);

// create metalTextureDescriptor
MTLTextureDescriptor *metalTextureDescriptor =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
                                                       width:textureSize.width
                                                      height:textureSize.height
                                                   mipmapped:NO];
metalTextureDescriptor.storageMode = MTLStorageModeManaged;
metalTextureDescriptor.usage = MTLTextureUsageUnknown;

// create clear texture
vector_uchar4 clearPixel = {0x00,0x00,0x00,0x00};
id<MTLTexture> clearTexture = [self fillTextureOfSize:textureSize withPixel:clearPixel];

// pull texture off GPU
id<MTLCommandBuffer> commandBuffer = [metalCommandQueue commandBuffer];
id<MTLTexture> testTexture = [self pullTextureOffGPU:clearTexture withCommandBuffer:commandBuffer];

// a bunch of other rendering (I leave 'testTexture' untouched)
[commandBuffer commit];
[commandBuffer waitUntilCompleted];

// I then check out the pixels
[self debugTexture:testTexture];
```

At this point, when I break in the debug-texture routine (below), the pixel bytes seen are 0x0000ffff and not the expected 0x00000000.

```objc
-(id<MTLTexture>)fillTextureOfSize:(CGSize)size withPixel:(vector_uchar4)pixel
{
    id<MTLTexture> texture = [self.device newTextureWithDescriptor:metalTextureDescriptor];
    NSUInteger pixelCount = size.width*size.height;
    vector_uchar4 *buff = malloc(pixelCount*sizeof(vector_uchar4));
    for (NSUInteger i=0; i<pixelCount; i++)
        buff[i] = pixel;
    [blackTexture replaceRegion:MTLRegionMake2D(0, 0, size.width, size.height)
                    mipmapLevel:0
                      withBytes:buff
                    bytesPerRow:size.width*sizeof(vector_uchar4)];
    free(buff);
    return texture;
}

-(id<MTLTexture>)pullTextureOffGPU:(id<MTLTexture>)inputTexture
                 withCommandBuffer:(id<MTLCommandBuffer>)commandBuffer
{
    id<MTLTexture> retTexture = [self.device newTextureWithDescriptor:metalTextureDescriptor];
    if (retTexture) {
        id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
        [blit copyFromTexture:inputTexture
                  sourceSlice:0
                  sourceLevel:0
                 sourceOrigin:MTLOriginMake(0, 0, 0)
                   sourceSize:MTLSizeMake(inputTexture.width, inputTexture.height, 1)
                    toTexture:retTexture
             destinationSlice:0
             destinationLevel:0
            destinationOrigin:MTLOriginMake(0, 0, 0)];
        [blit synchronizeTexture:retTexture slice:0 level:0];
        [blit endEncoding];
    }
    return retTexture;
}

-(void)debugTexture:(id<MTLTexture>)theTexture
{
    NSInteger width = theTexture.width;
    NSInteger height = theTexture.height;
    void *buffer = malloc(width*height*4);
    [theTexture getBytes:buffer
             bytesPerRow:width*4
              fromRegion:MTLRegionMake2D(0,0,width,height)
             mipmapLevel:0];
    free(buffer); // break here to examine buffer
}
```
Posted by B_Payan.
Post not yet marked as solved
7 Replies
1.8k Views
I have an occasional issue with my MTKView renderer stalling for 1.0 s on obtaining a currentRenderPassDescriptor. According to the docs, this is either due to the view's device not being set (it is) or to there being no drawables available. If there are no drawables available, I don't see a means of just immediately bailing or skipping that video frame; the render loop will stall for 1.0 s. Is there a workaround for this? Any help would be appreciated.

My workflow is a bunch of kernel-shader work, then one final vertex shader. I could do the drawing of the final shader onto my own texture (instead of using the currentRenderPassDescriptor), then hoodwink that texture into the view's currentDrawable -- but in obtaining that drawable we're back to the same stalling situation. Should I get rid of MTKView entirely and fall back to using a CAMetalLayer instead? Again, I suspect the same stalling issues would arise. Is there a way to set the maximumDrawableCount on an MTKView like there is on a CAMetalLayer?

I'm a little baffled, as according to the Metal System Trace my work is invariably completed in under 5.0 ms per frame on a 2015 iMac (R9 M395).
Posted by B_Payan.
Post not yet marked as solved
2 Replies
744 Views
In my MTKView render loop, I'm implementing a triple-buffering scheme which transitions between two videos (so, dual triple-buffering). This is done with a dispatch_semaphore_create(3). As the CVPixelBuffer frames arrive from my pair of AVPlayers, I convert them to MTLTextures with CVMetalTextureCacheCreateTextureFromImage.

I'm currently including the underlying CVMetalTextureCacheRef cache with each of the transitional buffers, so there are 6 such caches that I rotate through. Is this necessary? Overkill?

Is a call to CVMetalTextureCacheCreateTextureFromImage thread-safe? If not, a thought I had was to enclose the call to it in a dispatch_sync handler, grab the resulting texture, and have only one such cache on the MTKView.
Posted by B_Payan.
Post not yet marked as solved
5 Replies
1.1k Views
Here's the (terse) code of my MTKView render loop:

```objc
- (void)drawRect:(NSRect)dirtyRect
{
    dispatch_semaphore_wait(self.inflightSemaphore, DISPATCH_TIME_FOREVER);
    [super drawRect:dirtyRect];
    ...
    [commandBuffer addCompletedHandler:^(id buffer) {
        ...
        dispatch_semaphore_signal(self.inflightSemaphore); // being flagged here!
    }];
}
```

With the Thread Sanitizer turned on, Xcode (9.4.1) is flagging the dispatch_semaphore_signal as a 'Data race detected'.

In my applicationDidFinishLaunching code, I allocate the MTKViews (MonitorMTKView and DisplayMTKView) programmatically and add them as subviews of an NSWindow. This is flagged as a 'Heap block allocated by thread 1' and a 'Write of size 8 by thread 1' by the race detector. These MTKViews are set up to require manual draw calls (enableSetNeedsDisplay = NO and paused = YES).

After this setup, I create my CVDisplayLink callbacks; I have these issue the render call, and that's where the semaphore is flagged as a 'Read of size 8 by thread 5' in a MonitorMTKView instance. I should mention that the render output of DisplayMTKView is passed down to a MonitorMTKView for further processing/display.

How is this even possible? It's a semaphore! Here's some further debug info (with some unrelated NSLog output removed):

```
WARNING: ThreadSanitizer: data race (pid=2876)
  Read of size 8 at 0x7b0c000cd4c0 by thread T35:
    #0 __32-[MonitorMTKView drawRect:]_block_invoke MonitorMTKView.m:225 (CDTest:x86_64+0x100241e5c)
    #1 _doMTLDispatch <null> (Metal:x86_64+0x550e9)
    #2 _dispatch_client_callout <null> (libdispatch.dylib:x86_64+0x1d8e)

  Previous write of size 8 at 0x7b0c000cd4c0 by thread T30 (mutexes: write M578848238724123704):
    #0 __copy_helper_block_ MonitorMTKView.m:224 (CDTest:x86_64+0x100241f5d)
    #1 _Block_copy <null> (libsystem_blocks.dylib:x86_64+0x8ef)
    #2 -[MTKView draw] <null> (MetalKit:x86_64+0x134d9)
    #3 -[DisplayMTKView drawRect:] DisplayMTKView.m:647 (CDTest:x86_64+0x1000eb774)
    #4 -[MTKView draw] <null> (MetalKit:x86_64+0x134d9)
    #5 cvCallback DisplayCVLinkWrapper.m:228 (CDTest:x86_64+0x1001f2e1c)
    #6 CVDisplayLink::performIO(CVTimeStamp*) <null> (CoreVideo:x86_64+0x35ce)

  Location is heap block of size 40 at 0x7b0c000cd4a0 allocated by thread T30:
    #0 malloc <null> (libclang_rt.tsan_osx_dynamic.dylib:x86_64h+0x4998a)
    #1 _Block_copy <null> (libsystem_blocks.dylib:x86_64+0x8b2)
    #2 -[MTKView draw] <null> (MetalKit:x86_64+0x134d9)
    #3 -[DisplayMTKView drawRect:] DisplayMTKView.m:647 (CDTest:x86_64+0x1000eb774)
    #4 -[MTKView draw] <null> (MetalKit:x86_64+0x134d9)
    #5 cvCallback DisplayCVLinkWrapper.m:228 (CDTest:x86_64+0x1001f2e1c)
    #6 CVDisplayLink::performIO(CVTimeStamp*) <null> (CoreVideo:x86_64+0x35ce)

  Mutex M578848238724123704 is already destroyed.

  Thread T35 (tid=386576, running) is a GCD worker thread

  Thread T30 (tid=386513, running) created by main thread at:
    #0 pthread_create <null> (libclang_rt.tsan_osx_dynamic.dylib:x86_64h+0x283ed)
    #1 CVDisplayLink::start() <null> (CoreVideo:x86_64+0x268f)
    #2 -[DisplayItemController setupWindows] DisplayItemController.m:312 (CDTest:x86_64+0x100012b39)
    #3 -[DisplayItemController initWithController:withContext:] DisplayItemController.m:90 (CDTest:x86_64+0x10000d71d)
    #4 -[AppDelegate applicationDidFinishLaunching:] AppDelegate.m:229 (CDTest:x86_64+0x100208d87)
    #5 __CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__ <null> (CoreFoundation:x86_64h+0x9aedb)
    #6 start <null> (libdyld.dylib:x86_64+0x1014)

SUMMARY: ThreadSanitizer: data race MonitorMTKView.m:225 in __32-[MonitorMTKView drawRect:]_block_invoke
```
Posted by B_Payan.
Post not yet marked as solved
0 Replies
480 Views
Here I was, getting all hyped about pulling off-screen Metal textures out of SpriteKit after watching the Session 609 video from WWDC 2017. That was over a year ago! And yet there are absolutely no overview docs for SKRenderer, and there is no sample code either:

https://developer.apple.com/documentation/spritekit/skrenderer?language=objc

I find this very odd indeed. Does anyone here have any insight into this class, its docs, or sample code? BTW, the same goes for SKTransformNode.
Posted by B_Payan.
Post not yet marked as solved
0 Replies
534 Views
At NAB in Las Vegas this year, there were Windows-based systems demonstrated that support the import of really wide ProRes files for content, typically in the range of 24K+ pixels wide. These are typically used in display systems for stadium LED boards; a local stadium here in town has one that's over 27000 pixels wide (and 64 pixels high). Until now, this task has been performed on Windows using legacy uncompressed (raw) AVI formats. Side note: I believe H.264 has an inherent limit of around 9000 pixels due to its internal encoding blocks.

Yet Metal tops out at 16K in the handling of ProRes files, due to how AVFoundation shovels video frames directly onto the GPU. This 16K GPU limit is documented here ("Maximum 2D texture width and height"): https://developer.apple.com/metal/Metal-Feature-Set-Tables.pdf

I find it very ironic that Apple's own ProRes codec won't support a given pixel size/format, and yet that same file is supported on the Windows side. Even the iMac Pro won't support it, but a Windows box running a Vega 56 or 64 GPU will. In short, this lack of very-wide video frame support is shutting an entire demographic of content creators out of using high-end Macs in their workflow. Has anyone else had this issue, or can anyone shed some light on the rationale for the 16K limit?

-- bp
Posted by B_Payan.
Post not yet marked as solved
2 Replies
556 Views
I'm trying to figure out the maximum usable size of a CVPixelBuffer. Here's my sample code:

```objc
CGSize frameSize = CGSizeMake(32768,16384);
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                         [NSNumber numberWithBool:YES], kCVPixelBufferMetalCompatibilityKey,
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                         nil];
CVPixelBufferRef pxBuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                      frameSize.width,
                                      frameSize.height,
                                      kCVPixelFormatType_32BGRA,
                                      (__bridge CFDictionaryRef)options,
                                      &pxBuffer);
if (status != kCVReturnSuccess || pxBuffer == nil)
    NSBeep(); // error creating buffer
```

Here's a little table of (x,y) pixel sizes and their failure status:

```
(32768, 16384)            FAILS
(32768, 16383)            OK
(131072, 4095)            OK
(131072, 4096)            FAILS
(4095, 131072)            FAILS
(16383, 32768)            FAILS
(16384, 32768)            FAILS
(16384, 32767)            OK
(pow(2,24), pow(2,4))     OK
(pow(2,24), pow(2,5))     FAILS
(pow(2,24), pow(2,5)-1)   OK
(pow(2,4), pow(2,24))     OK
(pow(2,5), pow(2,24))     FAILS
(pow(2,5), pow(2,24)-1)   OK
```

I can't quite figure out the criteria for what makes for valid (extreme) dimensions of an RGBA32 CVPixelBuffer. An (M,N) size might work, but then (N,M) won't.

Is there documentation for these maximum sizes? It would seem that they exceed the Metal specs given here: https://developer.apple.com/metal/Metal-Feature-Set-Tables.pdf, which leads me to believe creation of a CVPixelBuffer is not a GPU-bound activity.

I'm trying this on a late-2015 iMac 5K, Radeon R9 M395, OS 10.12.6.
Posted by B_Payan.
Post not yet marked as solved
1 Reply
354 Views
With iOS, there's an ability to prevent the deletion or installation of apps on the device. How is this done with tvOS? I don't see an option for it. Young "battling" siblings on a given Apple TV seem to take to deleting each other's apps; how can this be prevented?
Posted by B_Payan.