Post not yet marked as solved
I have a customer who has a 2010 Mac Pro running macOS 10.14.6. This customer is trying to use my application with RAW images from their Nikon camera (8256 x 5504). The customer gets an image at the correct size, but with only one row of pixels; the rest is black. When I test the images on a 2012 MBPr it works fine, and on a 2015 MacBook it also works. The customer also tried an image that is 8166 x 5302, and that works for them. When my app does its processing, it logs as much information about the CIContext as possible, including [CIContext inputImageMaximumSize] and [CIContext outputImageMaximumSize], which both report 16384 x 16384, yet the context doesn't return the results I'd expect when the images exceed 8192 x 8192 on this customer's machine. I am leaning towards the idea that macOS is returning incorrect information and the graphics card in their Mac Pro doesn't actually support anything higher than 8192 x 8192. If it helps, here is the log from the context:
priority: default
workingSpace: (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; Generic HDR Profile)
workingFormat: RGBAh
downsampleQuality: High
max sizes, in{16384 x 16384} out{16384 x 16384}
Any ideas or suggestions? To create the context it calls [CIContext context]. To create the data it calls [CIContext render:toBitmap:rowBytes:bounds:format:colorSpace:].
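For reference, this is roughly the guard I have in mind before rendering, so oversized images could fall back to a tiled path rather than trusting the reported maximums blindly (a minimal sketch; the fallback itself is not shown):

```objc
#import <CoreImage/CoreImage.h>

// Sketch: check the image against the context's reported limits before
// calling -render:toBitmap:rowBytes:bounds:format:colorSpace:.
static BOOL CanRenderInOnePass(CIContext *context, CIImage *image)
{
    CGSize inMax  = [context inputImageMaximumSize];
    CGSize outMax = [context outputImageMaximumSize];
    CGRect extent = [image extent];
    return extent.size.width  <= inMax.width  &&
           extent.size.height <= inMax.height &&
           extent.size.width  <= outMax.width &&
           extent.size.height <= outMax.height;
}
```

On the customer's machine this check would pass (16384 x 16384 is reported), which is exactly why I suspect the reported value rather than my own size handling.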
Post not yet marked as solved
The notarization process appears to work correctly. On macOS 10.14.5 I get the "scanned for malware" dialog. However, on Catalina the DMG is rejected and I'm having a hard time understanding why.

For "spctl -a -v -t open <path>" I get:
rejected
source=Insufficient Context

For "spctl -vvv --assess --type install <path>" I get:
accepted
source=Notarized Developer ID
origin=Developer ID Application:

Can anyone provide any information in regards to the error message "Insufficient Context", and any suggestions on what to do to solve it, please.
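One thing that may be worth trying (I can't say for certain it explains the rejection here): the spctl(8) man page's own example for assessing disk images passes a --context option that the plain "-t open" invocation above omits:

```shell
# Assess a disk image the way the spctl(8) man page shows,
# supplying the primary-signature context explicitly.
spctl -a -t open --context context:primary-signature -vv /path/to/MyApp.dmg
```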
Post not yet marked as solved
I am trying to assist another developer with getting their application notarized. This is their first macOS application and it's been a right old slog; quite a few things were wrong that prevented the application from being code signed in the first place. However, we've got that working now, except that the Gatekeeper check, which according to the documentation should be done before submitting to Apple for notarization, continually fails for them:
source=Unnotarized Developer ID
origin=Developer ID Application: <private>
When I try it on my machine using my own Developer ID certificate, it works, so it's obviously something tied specifically to their account. Can someone please tell me what this error actually means and how I go about solving it, so they can submit their application for notarization?
Post not yet marked as solved
Background: I've been using Core Image for almost a decade, but in my latest project I have a filter which samples surrounding pixels in order to update the central pixel. Currently I'm passing an image/texture to the kernel which contains the locations to sample. It's not as fast as I would like, and I believe some of that is because it has to do a read to get the location, then go and read the pixel at that location (I store two locations in one pixel), so for every two pixels sampled it has to do three texture reads. I would like to avoid this extra read and simply use an array of locations; however, I'm not able to figure out how to do this with Core Image. In OpenGL you can create an array constant or a uniform array, but I couldn't get either to work in Core Image. If I were to create a custom Metal shader, could I pass it an array of locations (vec2/float2)?
Post not yet marked as solved
Is there a way to determine the maximum size of a tile in a Core Image filter? I am still getting crashes from the GPU (see the crash report listed below), which result in either a corrupted output where the app cannot quit (it has to be force quit) or, as I started to see yesterday, complete app crashes. Right now I am working on the theory that I am potentially exceeding the maximum tile size, and wondered if there is a way I can quantify this.

My problematic filter (by using intermediate renders and logging I am able to narrow down exactly which filter is crashing) reads the surrounding pixels to do some "local" analysis in order to update the central pixel. I've done my best to keep the radius low, but with some images it produces unnatural results when using a small radius, especially when I test it with 50 or 100 megapixel images. I am using PDS locations to reduce the number of pixels it needs to sample, and using an image to store those locations (as I don't know how to share an array of vec2 objects between kernels). My earlier problem was that it would only render a portion of the image; this was solved by handling the ROI, but that's when I started to experience crashes. It no longer crashes for radii of 100 or 200 pixels, but going over that (which is required for larger images) is an almost certain way to bring it down (please note that sometimes it doesn't crash).

Running on the CPU solves this, but at a cost. The CPU (on the machine I'm developing with) is limited to processing 50 megapixel images. While the GPU claims to cope with larger images, I experience these crashes. Processing a 50 megapixel image on the CPU took 10 minutes to render; the GPU averages about 1/5th of that time, but I simply can't trust it.

GPU crash report below.

Wed Feb 13 00:02:24 2019
Event: GPU Reset
Date/Time: Wed Feb 13 00:02:24 2019
Application: <appName>
Path:
OS Version: Mac OS X
Graphics Hardware: NVIDIA GeForce GT 650M
Signature: 8
Report Data:
NVDA(Graphics): Channel exception! Exception type = 0x8 DMA Engine Error (FIFO Error 8)
Channel Info: [56, 0x1e, 0x11, 0x1e51]
Version Info: [com.apple.GeForce, 10.1.0, 0x7d780b0a, 18894120, 310.42.25f02, 1]
Resource Manager Info:
4443564e 00000118 8fb28137 f8a4de8f 00000001 00000014 d3793533 46d3a4a6
4614f297 e71edccf 00088301 000000e1 12f2500a 081d0a4d 1002c197 20001810
30002800 05dc3800 4805dc40 00500392 00601e58 01080d22 808e8010 81042202
22188091 1001080d 02808a84 9c820422 0e2250a8 84100108 22028180 80a6b805
030a0880 0a00149a 1f138222 47100008 d2200118 e1b02806 50004804 60015820
78007064 01019000 0a000198 00138a03 13923d0a 24380a3a 0e000000 01000000
490000e0 0100000f 49000000 0000000b 47000000 ff000304 ff000000 ff000000
ff000000 ff000000 ff000000 0a000000 1d13c220 00100008 dc80a818 1e200bf6
03300828 b5ade038 059cbbac 48028040 00000013 4443564e
Accelerator Event History:
0a0808001a04080010010a0808001a04080210010a2b0800122708c080021080f09dd1
86f0ffffff0118a18fc08e8c87800f20b79d8080c0d80328d2bc808090020a23080012
1f08c480021080f09dd186f0ffffff0118a18fc08e8c87800f208082800828000a0808
001a04080210000a0808001a0408001000

I am beyond tired at this point, incredibly frustrated and I really want to quit. Any help or suggestions you can offer would be greatly appreciated.
This is my first time using CIKernelROICallback, and while it solved the problem of only a portion of my image being processed, I am now getting what I believe to be GPU crashes, whereby the resulting image is corrupted and my application has to be force quit (after rendering it's still using 100% of the CPU). I have two questions.

1. Can someone look at the code below and confirm that it looks correct, please?

// called when setting up for fragment program and also calls fragment program
- (CIImage *)outputImage
{
// --- Apple's example used a float, but I tried it as a double.
double radius = [blurRadius doubleValue];
CISampler *src = [CISampler samplerWithImage:inputImage];
CISampler *envMap = [CISampler samplerWithImage:map];
CGRect mapExtent = [map extent]; // --- Grab the map extent.
// --- This is the callback that we cannot do in Xojo.
CIKernelROICallback callback = ^(int index, CGRect rect) {
if ( index == 0 ) {
return CGRectInset( rect, -radius, -radius );
} else {
return mapExtent;
}
};
// --- This following line was missing from Apple's example, it allows the kernel
// to process with a radius of 200, but still crashes at 500 or 1000.
CGRect dod = CGRectInset( [inputImage extent], -radius, -radius );
return [_OCIGrouperFilterKernel applyWithExtent:dod
roiCallback:callback
arguments:@[ src,
envMap,
sampleCount,
spatialWC,
rangeWC,
clampScale]];
}

2. Before the image is passed to this filter, I use [CIImage imageByClampingToExtent:], and when the image is returned from the filter I call [CIImage imageByCroppingToRect:] (using the extent before processing). Should the DOD be the inset extent of the image before it was clamped? i.e.

CGRect dod = CGRectInset( unclampedExtent, -radius, -radius );

Thanks for any help you can give; this section has been particularly painful and frustrating.
Bizarre issue: when my kernel is run on the GPU and I ask it to process the full-sized image (which can be a 10 mpx, 20 mpx or 5 mpx image), the resulting image only shows the bottom half being processed by the kernel. When it's done via the CPU, it does the complete image, just about 10 times slower. Anyone run into this? Any suggestions on how to solve it? Obviously I want to use the GPU, as it's about 10x faster than the CPU.
Post not yet marked as solved
Is there a way to read an ivec from a sampler? For example, if I read a sample from a 16-bit image, all the values return in the range of 0.0 ~ 1.0. However, is there a way I could get them returned in the range of 0 ~ 65535, without having to multiply them?
Post not yet marked as solved
I'm trying to utilize subsampling to get actual pixels in an area (but not the average), and so far I keep failing on performance.

My first attempt was to create a kernel from a string and inject an array, using either "const" or "uniform" so that the array wouldn't have to be built for each pixel that gets passed to the kernel. Neither const nor uniform seems to work for an array, and the performance was terrible; my guess is that it's rebuilding the array for each pixel. 1425 x 948 image, ~200 samples per pixel: took 3 seconds.

My second attempt was to create an image containing location data. After days, I had it working, and while it's faster than the above method, it's still too slow, probably because for each pixel sampled from the image it also has to sample the map image, so 200 samples actually incur 400 texture reads. 1425 x 948 image, ~200 samples per pixel: took 0.2 of a second.

So what am I missing? Is there a way to create a vec2/float2 array once, rather than for every single pixel? Is there something else I'm missing which would speed this whole thing up? A pre-emptive thank you for any tips or suggestions on this; it's been mighty painful trying to optimize this kernel.

Oh, before I forget: I tried the separable route, and boy does that make it fast! 0.064 seconds. But because of the nature of a separable kernel, my results are the worst!
Post not yet marked as solved
Basically I'm trying to share dynamically generated location data (pixel locations) between kernels. Assume I have the points 0x0, 51x51, 102x102, 153x153, 204x204 & 255x255. I have created a block of memory, stuffed the locations into it (so they appear as red & green channels), wrapped that data in a CGImage, and then created a CIImage from it. They don't show up in the filter as the correct locations. The closest I've gotten is to use the colorspace kCGColorSpaceGenericRGBLinear when creating the CGImage, but they're still wrong: 0x0 is correct, but in my short tests it appears the other values are off by 23. Supposedly I can turn off color management for the entire rendering chain, but as the rest of the chain is already working and I really don't want to mess with it, I'd like to know if there is a better way to share location data. I know that I can currently create a dynamic kernel and write the locations into the kernel, however it appears that dynamic kernels are going to go away in the future, so while I'm here, I thought I'd see if there's a way I can properly do this today.
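One approach that may sidestep the color-matching offset entirely (a sketch, not verified against this exact pipeline): build the location image from a float bitmap and pass nil for the color space, which tells Core Image not to color-match that one image, leaving the rest of the chain managed as before:

```objc
#import <CoreImage/CoreImage.h>

// Sketch: wrap raw pixel locations in a CIImage that Core Image will
// NOT color-match, by passing nil for the color space.
static CIImage *LocationImageFromPoints(const CGPoint *points, size_t count)
{
    size_t rowBytes = count * 4 * sizeof(float);      // RGBAf, one row
    NSMutableData *data = [NSMutableData dataWithLength:rowBytes];
    float *px = (float *)data.mutableBytes;
    for (size_t i = 0; i < count; i++) {
        px[i * 4 + 0] = (float)points[i].x;           // R = x
        px[i * 4 + 1] = (float)points[i].y;           // G = y
        px[i * 4 + 2] = 0.0f;
        px[i * 4 + 3] = 1.0f;
    }
    // A nil color space means the bitmap is used as-is (no color matching).
    return [CIImage imageWithBitmapData:data
                            bytesPerRow:rowBytes
                                   size:CGSizeMake(count, 1)
                                 format:kCIFormatRGBAf
                             colorSpace:nil];
}
```

Using a float format also avoids quantizing the locations to 8 bits, so values larger than 255 can be stored directly.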
Post not yet marked as solved
Just noticed that [CIKernel kernelsWithString:] and related have all been deprecated in 10.14, but I can find no mention of what I am supposed to use instead.
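The replacement appears to be Metal-based CIKernels compiled at build time rather than from strings at runtime. A sketch of loading one (assuming the kernel source was compiled with the Core Image Metal flags into a library named default.metallib; the function name myKernel is a placeholder):

```objc
#import <CoreImage/CoreImage.h>

// Sketch: load a CIKernel from a precompiled Metal library instead of a
// source string. The kernel function must be compiled for Core Image
// (e.g. with -fcikernel) into the library referenced here.
static CIKernel *LoadMetalKernel(NSError **error)
{
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"default"
                                         withExtension:@"metallib"];
    NSData *libraryData = [NSData dataWithContentsOfURL:url];
    return [CIKernel kernelWithFunctionName:@"myKernel"   // placeholder name
                       fromMetalLibraryData:libraryData
                                      error:error];
}
```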
Post not yet marked as solved
I have an that a reviewer keeps rejecting an application update (not a new application) using this as the reason. Each time they show a screenshot of the Application menu, where the automatic Apple added menu items ("Hide App", "Hide Other Apps" ) are in the reviewer's selected language and the items that I've added to the Application menu are in English.Over the last two days, I've written to the reviewer twice and not once received a response. I explained that our application only supports English and that the items in English are ones that I've added to the menu, while the others are ones that Apple has automatically added. I sent the reviewer a screenshot of the application running on an English system and verified that English is the only language within the product, plus I even double checked "Localization native development region" key in the plist to make sure that's set to "en" or English.I've read the App Store Review guidelines and it seems that the reviewer is either interpreting this as a bug with my application.Am I missing something? Can I force the Apple auto added menu items to appear in English instead of their selected system language? Or should I apply for an appeal?
Can kSecUseKeychain be used with SecItemCopyMatching? I'm trying to find a way to display to the user which keychain some items are on.
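From what I can tell, kSecUseKeychain only applies to SecItemAdd. One way to find where an item lives (a sketch using the legacy file-based keychain APIs, which still work on macOS) is to ask SecItemCopyMatching for an item reference and then query that item's keychain:

```objc
#import <Security/Security.h>

// Sketch: find the keychain file holding a matching generic-password item
// by requesting an item reference, then asking for its keychain's path.
static NSString *KeychainPathForService(NSString *service)
{
    NSDictionary *query = @{
        (__bridge id)kSecClass:       (__bridge id)kSecClassGenericPassword,
        (__bridge id)kSecAttrService: service,
        (__bridge id)kSecReturnRef:   @YES,
    };
    CFTypeRef item = NULL;
    if (SecItemCopyMatching((__bridge CFDictionaryRef)query, &item) != errSecSuccess)
        return nil;

    SecKeychainRef keychain = NULL;
    SecKeychainItemCopyKeychain((SecKeychainItemRef)item, &keychain);

    char path[PATH_MAX];
    UInt32 length = sizeof(path);
    NSString *result = nil;
    if (keychain && SecKeychainGetPath(keychain, &length, path) == errSecSuccess)
        result = [NSString stringWithUTF8String:path];

    if (keychain) CFRelease(keychain);
    CFRelease(item);
    return result;
}
```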
I am using AVAssetWriter to create MP4 files (AVFileTypeMPEG4) and this bit works fine; however, I am having trouble writing some of the metadata.

These keys work fine:
* AVMetadataCommonKeyAuthor
* AVMetadataCommonKeyDescription
* AVMetadataCommonKeyCopyrights

But the following don't; they log an entry to the console when used (about being invalid):
* AVMetadataCommonKeyModel
* AVMetadataCommonKeyMake
* AVMetadataCommonKeySoftware

I've also tried the QuickTime and iTunes variations, which don't log errors but don't add the metadata to the file either (and I specified the matching key spaces also):
* AVMetadataQuickTimeMetadataKeyMake
* AVMetadataQuickTimeMetadataKeyModel
* AVMetadataQuickTimeMetadataKeyKeywords
* AVMetadataQuickTimeMetadataKeySoftware
* AVMetadataiTunesMetadataKeyEncodedBy

Should I file this as a bug, or is there some magic trick to make these work?
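For reference, this is the shape of the metadata items being written (a minimal sketch of the pattern, shown with one of the keys that does work; the same pattern is what fails for the Make/Model/Software keys):

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch: attach a common-keyspace metadata item to an AVAssetWriter.
// The same pattern applies to the QuickTime and iTunes key spaces.
static void AddAuthorMetadata(AVAssetWriter *writer, NSString *author)
{
    AVMutableMetadataItem *item = [AVMutableMetadataItem metadataItem];
    item.keySpace = AVMetadataKeySpaceCommon;
    item.key      = AVMetadataCommonKeyAuthor;   // one of the keys that works
    item.value    = author;

    NSArray *existing = writer.metadata ?: @[];
    writer.metadata = [existing arrayByAddingObject:item];
}
```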