CoreML 6 beta 2 - Failed to create CVPixelBufferPool

Hello everyone,

I am trying to train using Create ML Version 6.0 Beta (146.1) with the Image Feature Print v2 feature extractor.

I am using 100K images (about 4 GB in total) on my M3 Max with 48 GB of RAM (macOS 15.0 Beta (24A5279h)).

The images seem to be read and displayed correctly in the Data Source section (no images with corrupted data appear to be there).

When I start the training, everything is fine for the first 6k–7k pictures, then I receive the following error:

Failed to create CVPixelBufferPool. Width = 0, Height = 0, Format = 0x00000000

This is the first time I am using it, so I don't have much experience. Could you help me understand what the problem could be?

Thanks a lot

It does look like an invalid image may be present. Can you try 10 classes at a time (to reduce the problem space) and see if you can identify which class/image leads to the problem? Once you see the issue again, go back to the data source preview to see if you can spot the problematic image(s).

Also, I'd avoid enabling any augmentation options for a first-pass result, since augmentation increases training time.
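If scanning 100K files in the preview by hand is impractical, a quick pre-check outside Create ML can help narrow things down. This is only a rough sketch, assuming the data source is a folder of class subdirectories; the path and extension list are placeholders. It simply asks ImageIO to decode every file and prints anything it cannot open:

import Foundation
import ImageIO

// Rough pre-check: walk the dataset folder and flag files ImageIO cannot decode.
// The path and the extension list below are placeholders, not from this thread.
let datasetURL = URL(fileURLWithPath: "/path/to/TrainingData")
let imageExtensions: Set<String> = ["jpg", "jpeg", "png", "heic"]

let enumerator = FileManager.default.enumerator(at: datasetURL, includingPropertiesForKeys: nil)
while let fileURL = enumerator?.nextObject() as? URL {
    guard imageExtensions.contains(fileURL.pathExtension.lowercased()) else { continue }
    // Creating the source fails on unreadable headers; forcing a decode of
    // frame 0 also catches files that are truncated partway through.
    if let source = CGImageSourceCreateWithURL(fileURL as CFURL, nil),
       CGImageSourceCreateImageAtIndex(source, 0, nil) != nil {
        continue
    }
    print("Could not decode: \(fileURL.path)")
}

Note that, as later replies in this thread suggest, files that pass this kind of check can still trigger the error, so it only rules out plain corruption.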

I shared the data set via AirDrop to another Mac (M2 Pro), still running Xcode 16 beta 2 (same CoreML version), but on macOS 14.5 (23F79) with 32 GB RAM.

It took longer, but there were no issues and the model trained.

It seems to be more of a macOS issue, but I don't know how to investigate further.

So I'm running into the exact same issue. I went through the photos (~64,000 or so) and nothing seems corrupt. Curious to hear if anyone else is having this issue.

It would be very useful to see which image failed, either by displaying the full path or by making logs available somewhere.

Did some extra checks.

Training data: 2 classes, 16,551 JPEG images (around 50 KB per image).

The exact error is: Failed to create CVPixelBufferPool. Width = 0, Height = 0, Format = 0x00000000

I validated all 16,551 images using "jpeginfo"; all files are OK.

I also tried a mix of augmentations to see whether adding or removing any would make a difference. It did not.

Finally, I tried switching back to the V1 feature extractor (Image Feature Print V1). Now it runs just fine.

So my best guess is that the V2 feature extractor does something extra, and fails.
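For anyone hitting this who trains with the CreateML framework from Swift rather than the app, the feature extractor revision can be pinned explicitly. The snippet below is only a sketch: the ModelParameters labels have changed across SDK releases and the paths are placeholders, but it shows the idea of forcing scenePrint revision 1 (Image Feature Print V1) as a workaround, where revision 2 would be the V2 extractor.

import CreateML
import Foundation

// Sketch of pinning the feature extractor to Image Feature Print V1
// (scenePrint revision 1). Paths are placeholders, and the parameter
// labels may differ depending on the SDK version.
let trainingDir = URL(fileURLWithPath: "/path/to/TrainingData")

let parameters = MLImageClassifier.ModelParameters(
    validation: .split(strategy: .automatic),
    augmentation: [],  // no augmentation for a first pass
    algorithm: .transferLearning(
        featureExtractor: .scenePrint(revision: 1),
        classifier: .logisticRegressor
    )
)

let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir),
    parameters: parameters
)
try classifier.write(to: URL(fileURLWithPath: "/path/to/ImageClassifier.mlmodel"))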

In conclusion: at a bare minimum, Create ML should report which file(s) it considers corrupt, and ideally why.

Beyond that, a nice enhancement would be the ability to ignore corrupt files and continue.

The issue is not caused by an invalid image. I debugged MLRecipeExecutionService and found that the problem occurs because it is unable to allocate an IOSurface. This only happens when using the V2 feature extractor. It is resolved by using macOS 14.6.1 and Xcode 15.4.
