Posts

Post not yet marked as solved
3 Replies
542 Views
Hello,

A vendor recently updated their SDK and I'm trying to update my project to use the new dylib. I'm not able to use the dylib out of the box due to linking problems, so I fired up install_name_tool to update its id to work properly in my situation (a similar step worked with their prior release). However, with this build I'm getting the error:

% install_name_tool -id @rpath/libblahblahblah.dylib libblahblahblah.dylib
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/install_name_tool: fatal error: file not in an order that can be processed (link edit information does not fill the __LINKEDIT segment): libblahblahblah.dylib (for architecture x86_64)

Google isn't returning much beyond explanations of the Mach-O file layout. Is there a simple way to alter the dylib locally to address this? Is there something I can suggest the vendor do upstream?

Thanks,
mike
Post marked as solved
1 Reply
455 Views
I know this is a long shot, but I just found out I won't be available at 2:40 today for my lab appointment in the HDR and EDR lab. Is there a way to request an earlier time (assuming there's availability)?
Post not yet marked as solved
2 Replies
1.4k Views
I'm trying unsuccessfully to upgrade our CIImage-based video display pipeline to wide color and HDR. The problem is finding a >8-bit pixel format supported by CIImage imageWithIOSurface: or imageWithCVImageBuffer:. For all but the most common formats (32, '2vuy', etc.) I get the following error on the console:

[api] [CIImage initWithIOSurface:options:] failed because surface format was x422.

I've tried kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange (the suggested format for HEVC wide-color decompression according to the AVFoundation WWDC video) and numerous others. Are supported pixel formats documented anywhere?
Post not yet marked as solved
1 Reply
460 Views
Hello,

After a recent update of Catalina, our Metal app stopped using the discrete GPU (instead scheduling all work on the MacBook's integrated GPU). I've confirmed that we are setting up Metal with the discrete GPU's device. Is this a known issue? Has the API changed somehow to trigger automatic graphics switching?

mike
Post not yet marked as solved
0 Replies
477 Views
I'm trying to compile some compute kernel source and force it all to run precise or fast. The docs (https://developer.apple.com/documentation/metal/mtlcompileoptions/1515914-fastmathenabled?language=objc) say of MTLCompileOptions.fastMathEnabled: "The default value is YES. A YES value also enables the high-precision variant of math functions for single-precision floating-point scalar and vector types." Is this a typo? NO would enable high precision, correct?

Second, it appears I can do this with a C++ namespace in the kernel source itself. Am I correct that doing so will override the setting in MTLCompileOptions.fastMathEnabled? Can I wrap my kernel in

using namespace metal::fast // or metal::precise

to get all float math to compile as one variant or the other? Will this change simple float operations as well, or just functions like clamp/saturate?

Thanks,
mike
Post not yet marked as solved
0 Replies
526 Views
I have an MTKView in a window with other views to the sides. I'm trying to animate the layout of these controls in such a way that the MTKView changes size during the animation. However, no matter what I try, the MTKView bounds seem to update only once at the start of the animation, rather than tracking the frame change during the resize. Has anyone made this work? Will I have better luck rolling my own with CAMetalLayer?

mike
Post not yet marked as solved
0 Replies
502 Views
I have a simple fragment shader that supports sampling a texture and tinting the output with another color. Inputs to this shader can include both RGBA content (which will get tinted by the color) and single-channel content (which should just get colorized by the tint color).

The RGBA content is encoded with a pixel format of MTLPixelFormatRGBA8Unorm. In the fragment shader, the sampled color appears as (r, g, b, a)... all good so far. The single-channel content is encoded with a pixel format of MTLPixelFormatR8Unorm. In the fragment shader, the sampled color appears as (red, 0, 0, 0).

Is there a way to set up the shader to behave more like OpenGL and splat that value to all channels (red, red, red, red)? Is there any way to determine the texture's pixel format in the shader? Do I have to set things up with two different pipelines for these two formats?

Thanks,
mike
Post not yet marked as solved
2 Replies
927 Views
I have a workspace in which choosing the "jump to definition" contextual menu item on a highlighted item in the editor no longer works. When it's selected, a ? briefly appears onscreen. Other projects and workspaces on this same machine work fine. If I check out the same git repo to a new path, the functionality works there. If I delete the original path and re-clone my repo to the same path, it does not work.

It seems something must be cached somewhere outside of my project directory, and that this cache is hosed. Any ideas?

mike
Post not yet marked as solved
0 Replies
414 Views
Hello,

I'm trying to update our pipeline based on ************ and Core Image/OpenGL to add support for Display P3 preview. However, I can't seem to find any common pixel format that satisfies all the requirements:

> 8 bit per channel
YCbCr color
able to be wrapped by CIImage via +imageWithCVImageBuffer:

Am I missing something? Is there really no overlap between Core Image and AVF on this? I can write a custom pixel converter, but what's the best format on the CIImage side to convert to?

mike
Post not yet marked as solved
5 Replies
2.1k Views
I'm trying to get things buttoned up in our app for the High Sierra launch, but I'm seeing discrepancies between the hardware-accelerated HEVC encoder and the software one. I'm using a VTCompressionSession to encode samples to HEVC. If I've enabled hardware acceleration via kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder, later calls to VTSessionSetProperty to set kVTCompressionPropertyKey_AverageBitRate do nothing. The same call does produce a change in the output samples' relative bitrate if hardware encode is not enabled.

Is this expected behavior? The H.264 encoder seems to have feature parity between software and hardware versions.