Hi Darrin,
Thanks again for your answers; they're very much appreciated. I must apologize for some of my earlier comments and attempts at optimizing this plugin. I'm not a CS major, so I approach the idea of optimization from a more app-user-centric view than from the programmer's reality. Forgive me if some of my suggestions are dead ends, foolish, ignorant, or not valid in the context of this developer API... For example, my trying to use an r32Float type for MTL instead of rgba32Float came from thinking that less channel data would be faster.
Thanks for validating that the number ranges I'm using in the sin() function are within an acceptable margin of floating-point precision. It means that as long as the texture data is populated with values within this range, I can feel more confident that the results are correct. What you describe about these numbers becoming ever more spread out farther from zero, and closer together near it, makes sense.
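To make that spacing concrete for myself, I printed the gap between adjacent Float values (the "ulp") at a few magnitudes; a quick sketch of my understanding:

```swift
// The gap between one Float and the next representable Float (its "ulp")
// grows with magnitude, which is why sin() inputs far from zero
// lose precision faster than inputs near zero.
print(Float(1.0).ulp)          // ~1.19e-07 (2^-23)
print(Float(1000.0).ulp)       // ~6.1e-05  (2^-14)
print(Float(1_000_000.0).ulp)  // 0.0625    (2^-4)
```

So at a magnitude of one million, consecutive Floats are already a sixteenth apart, which bounds how precisely any sin() argument in that range can be represented.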
Understood: everything in Motion/FCP eventually has to end up on the GPU. I guess the big issue for me is how to get the data to the GPU as efficiently as possible. My current attempts at going by way of a texture are generally way too slow for fast, interactive use... It reminds me of the "old days" of working creatively on Silicon Graphics refrigerators, compositing video at 8 bits per channel with limited texture memory, a bus with limited bandwidth, etc. That meant interactivity was SLOW, even though it was cutting-edge technology at the time!
Sure, everything's gotten faster and the technology has improved significantly, but this feels like being back in those days, decades later. It feels like a similar issue as back then: not having texture memory, or enough of it, to render what the CPU wanted to send...
So I have tried a few of your suggestions, but they raised more questions as well. Off the top of my head, this is what's coming up:
- I'm using an Intel MacBook Pro (2020), and when I looked into using Float16 data, I got a message saying this type isn't (yet) supported in Swift on macOS on Intel, meaning I can't directly generate half-float data unless I'm on an Apple silicon (M-series) chip. That's a problem... I found a few workarounds that import the Accelerate framework and use vImageConvert_PlanarFtoPlanar16F to convert an array from float to half. (Other solutions, which convert from float to half directly at the byte level, are a little too much for me to work through at this time.) This seems to work, but it then brings up another question entirely: why bother doing this at all?
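For reference, here's a minimal sketch of the conversion I'm describing, treating a flat Float array as a single-row planar vImage buffer (the function name and the one-row layout are my own choices; interleaved RGBA data can be handled the same way by making the width the total element count):

```swift
import Accelerate

/// Convert an array of Float values to 16-bit half-float bit patterns
/// using vImageConvert_PlanarFtoPlanar16F. The data is treated as a
/// single-row planar image, so RGBA-interleaved data works too as long
/// as the count is the total number of channel values.
func floatToHalf(_ input: [Float]) -> [UInt16] {
    var output = [UInt16](repeating: 0, count: input.count)
    input.withUnsafeBufferPointer { srcPtr in
        output.withUnsafeMutableBufferPointer { dstPtr in
            var src = vImage_Buffer(
                data: UnsafeMutableRawPointer(mutating: srcPtr.baseAddress),
                height: 1,
                width: vImagePixelCount(input.count),
                rowBytes: input.count * MemoryLayout<Float>.size)
            var dst = vImage_Buffer(
                data: dstPtr.baseAddress,
                height: 1,
                width: vImagePixelCount(input.count),
                rowBytes: input.count * MemoryLayout<UInt16>.size)
            vImageConvert_PlanarFtoPlanar16F(&src, &dst,
                                             vImage_Flags(kvImageNoFlags))
        }
    }
    return output
}
```

The resulting UInt16 array holds the raw half-float bit patterns, ready to be copied into an rgba16Float texture's contents.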
In the SDK's GradientCheckerboard example, the plugin creates an MTLPixelFormat.rgba32Float texture and then replaces the texture data, which to me means it's replacing the IOSurface data inside the FxImageTile wrapper. If it's doing this, isn't it then having to translate the float data to half to be compatible with the IOSurface anyway? Q: is there any benefit to my doing this conversion manually before replacing the texture data? This is how I approached my own plugin, by reinterpreting the example code from Obj-C into Swift.
In other words, would it really make any difference to write half-float data into the IOSurface directly vs. sending float?
- I started fresh again with a new plugin template from the latest FxPlug SDK (4.2.4), which I modified to be an FxGenerator after evaluating the code from the GradientCheckerboard example, and tried new approaches from scratch, including sending half-float data by converting a float array (using the Accelerate framework) into an MTLPixelFormat.rgba16Float texture and replacing the texture data. But no matter whether I'm sending half into rgba16Float or float into rgba32Float, a short series of errors like the following turns up when I build and run the plugin with the scheme set to launch Motion instead of the wrapper app. No idea why (yet):
2022-05-03 10:35:47.677780-0700 G Word XPC Service[8941:670499] Got an unexpected pixel format in the IOSurface: 0x52476841
This error is generated from the MetalDeviceCache.swift file, where it inspects the pixel format and logs this error when the format isn't one it expects (it handles rgba32Float, for example). But then why am I getting this error if I'm creating an rgba16Float-format texture?
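One clue, for what it's worth: the hex value in that log line decodes as an ASCII four-character code. A minimal sketch of the decoding:

```swift
// Decode a FourCC such as the one in the IOSurface error message.
// 0x52476841 is the ASCII bytes 'R' 'G' 'h' 'A', i.e. an RGBA
// half-float pixel format code (lowercase 'h' for half, vs. 'f' for float).
func fourCCString(_ code: UInt32) -> String {
    let bytes = [24, 16, 8, 0].map { UInt8((code >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? "????"
}

print(fourCCString(0x52476841))  // prints "RGhA"
```

So the IOSurface really does carry a half-float RGBA format; the "unexpected" error seems to mean the sample's format check simply doesn't list that code among the ones it recognizes.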
Confusing matters even more, I went back to my original test generator plugin and tried building and running it with Motion set as the executable in the scheme, to see if the same error occurred. Run this way, the plugin doesn't even get that far; it stops at breakpoint 1.1 on line 122, "let deviceCache = MetalDeviceCache.deviceCache", when the plugin is used on a layer in Motion... But when I build and run it using the wrapper app itself and launch Motion separately, the plugin renders the image as expected!
I'll continue experimenting... It would be great to get this working, even if it means accepting the slower speed, which I'm sure would improve by upgrading to a faster machine with an M-series chip later. I'd rather get the code working optimally first, though.
On another note, a simpler question...
- Is there any way to access the project's width and height parameters inside a generator plugin? For example, when applying this generator to my composition, I'd like to be able to read the width and height it uses, so I can manage the texture size independently, rather than relying on the destinationImage's tile and image bounds and having to recalculate them, since generators don't have sourceImage info.
Thanks again for your time and willingness to help. And apologies in advance for my lack of experience with development and with this SDK.