When to use vImage, Metal Performance Shaders, or Core Image?

I've looked in multiple places online, including here in the forums, where a somewhat similar question was asked (and never answered :( ), but I'm going to ask anyway:

vImage, Metal Performance Shaders, and Core Image overlap heavily in the kinds of operations they perform on image data. But none of the supporting materials (documentation, WWDC session videos, help) ever seems to pay much heed to the existence of the others when talking about its own framework.

For example, Core Image talks about how efficient and fast it is. MPS talks about everything being "hand rolled" to be optimized for the hardware it's running on, which means, yes, fast and efficient. And vImage talks about being fast and, yup, energy-saving.

But I and others have very little to go on as to when vImage makes sense over MPS. Or Core Image. If I have a large set of images and I want to get the mean color value of each image, equalize or adjust the histogram of each, or perform some other color operation on each image in the set, for example, which is best?

I hope someone from Apple -- preferably multiple people from the multiple teams that work on these multiple technologies -- can help clear some of this up.

Replies

I would love to learn the same. From what I've gathered over the years (but cannot confirm officially), vImage uses CPU vector instructions, Metal shaders go to the GPU, and Core Image depends on what kernel it ends up using. Core Image can be made to work with a custom Metal kernel, which means it will go to the GPU.
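For example, here's roughly how one of the operations you mention -- histogram equalization -- looks with vImage, staying entirely on the CPU. This is just a sketch with minimal error handling, and I can't vouch for it being the optimal approach:

import Accelerate
import CoreGraphics
import Foundation

// Sketch: equalize the histogram of a CGImage with vImage (CPU path).
func equalized(_ source: CGImage) throws -> CGImage {
    // Describe an 8-bit-per-channel ARGB working format.
    guard let format = vImage_CGImageFormat(
        bitsPerComponent: 8,
        bitsPerPixel: 32,
        colorSpace: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue)
    ) else {
        throw NSError(domain: "vImage", code: -1, userInfo: nil)
    }

    // Decode the source image into a vImage buffer.
    var src = try vImage_Buffer(cgImage: source, format: format)
    defer { src.free() }

    // Destination buffer with matching dimensions.
    var dest = try vImage_Buffer(width: Int(src.width),
                                 height: Int(src.height),
                                 bitsPerPixel: format.bitsPerPixel)
    defer { dest.free() }

    // Histogram equalization, done on the CPU.
    let error = vImageEqualization_ARGB8888(&src, &dest, vImage_Flags(kvImageNoFlags))
    guard error == kvImageNoError else {
        throw NSError(domain: "vImage", code: error, userInfo: nil)
    }

    return try dest.createCGImage(format: format)
}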

enodev is pretty much spot on here, but to address your specific question:

If I have a large set of images and I want to get the mean color value of each image, equalize or adjust the histogram of each, or perform some other color operation on each image in the set, for example, which is best?

Core Image. Probably (-:

As a general rule it’s best to use the highest-level API that meets your needs. In this case Core Image is the highest-level API of the set you mentioned and, while I’m hardly a graphics expert, I believe it can meet your requirements.
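To give you a concrete starting point -- with the caveat that this is only a sketch, the names are illustrative, and I'm not the person to tune it -- getting the mean color of an image with Core Image looks roughly like this, reusing one CIContext for every image in the set:

import CoreImage
import CoreImage.CIFilterBuiltins

let sharedContext = CIContext()   // create once, reuse for the whole batch

// Sketch: reduce an image to its average color with the built-in CIAreaAverage filter.
func meanColor(of image: CIImage) -> [UInt8] {
    let filter = CIFilter.areaAverage()
    filter.inputImage = image
    filter.extent = image.extent

    // The filter's output is a single pixel; read it back as RGBA bytes.
    var pixel = [UInt8](repeating: 0, count: 4)
    sharedContext.render(filter.outputImage!,
                         toBitmap: &pixel,
                         rowBytes: 4,
                         bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                         format: .RGBA8,
                         colorSpace: CGColorSpaceCreateDeviceRGB())
    return pixel   // [red, green, blue, alpha]
}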

Share and Enjoy

Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"

As a general rule it’s best to use the highest-level API that meets your needs. In this case Core Image is the highest-level API of the set you mentioned and, while I’m hardly a graphics expert, I believe it can meet your requirements.

Right, I'm just not entirely sure what that is in this particular case, which is processing (getting the mean, variance, histogram, etc.) perhaps thousands of images in certain situations. The closest Apple ever seems to get to addressing this kind of thing is "CIContext is expensive; reuse it instead of tearing it down and recreating it for each image."

So I don't really know whether I need to use MPS to better take advantage of systems with lots of GPU resources, or whether CIContext/Core Image will do that just fine. I know each route fairly well -- Core Image in particular -- it's just that nearly all my experience is with a handful of images at a time, not the mass-production case.
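For what it's worth, the pattern I've been assuming for the mass-production case is roughly this -- one shared CIContext and an autorelease pool per image so intermediates don't pile up -- but I'd love to know whether that's actually the right shape at this scale (imageURLs and the per-image work are just placeholders):

import CoreImage
import Foundation

let context = CIContext()   // expensive to create, so make one and reuse it

func processBatch(_ imageURLs: [URL]) {
    for url in imageURLs {
        autoreleasepool {
            guard let image = CIImage(contentsOf: url) else { return }
            // ...per-image work here: mean, variance, histogram, and so on...
            _ = image.extent   // placeholder so the sketch compiles
        }
    }
}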