How to access intermediate results on the GPU when using MPSTemporaryImage in MPS CNN?

Hi all,


I have a question about how to access intermediate results on the GPU (not from the CPU) when using MPSTemporaryImage in MPS CNN.


Here is the situation: the current MPS CNN API works well for image classification tasks, but I would like to build my own application on top of Apple's work.


I would like to apply my own algorithm to intermediate results in the CNN, e.g., the output of one of the convolutional layers.

I know how to access an MTLBuffer and an MTLTexture through the Metal API; however, is there any way I can access the data in an MPSTemporaryImage directly from the GPU side (without copying back to the CPU, to reduce memory transfers)?


Any suggestions?


The possible solution I have found is to use an MPSImage for the results I would like to access (since we cannot access the underlying texture of an MPSTemporaryImage), and then create a compute command encoder and call setTexture so that my own Metal kernel can read the results from the MPSImage.
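To make this concrete, here is a rough Swift sketch of what I have in mind. The layer, the image sizes, and the pipeline built from a kernel I call "myProcessIntermediate" are all placeholders, not working code from my project:

import Metal
import MetalPerformanceShaders

// Assumption: device, commandQueue, conv (some MPSCNNConvolution layer), inputImage,
// and a compute pipeline built from a placeholder kernel "myProcessIntermediate"
// already exist elsewhere in the app.
func encodeNetworkWithCustomPass(device: MTLDevice,
                                 commandQueue: MTLCommandQueue,
                                 conv: MPSCNNConvolution,
                                 inputImage: MPSImage,
                                 customPipeline: MTLComputePipelineState) {
    let commandBuffer = commandQueue.makeCommandBuffer()!

    // Use a regular MPSImage (not MPSTemporaryImage) for the layer output I want to inspect,
    // so its texture remains valid after the layer has been encoded.
    let desc = MPSImageDescriptor(channelFormat: .float16,
                                  width: 112, height: 112,
                                  featureChannels: 64)          // placeholder sizes
    let intermediate = MPSImage(device: device, imageDescriptor: desc)

    // Encode the convolution, writing into the intermediate image.
    conv.encode(commandBuffer: commandBuffer,
                sourceImage: inputImage,
                destinationImage: intermediate)

    // Encode a custom compute pass on the same command buffer that reads the
    // intermediate result directly on the GPU -- no copy back to the CPU.
    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(customPipeline)
    encoder.setTexture(intermediate.texture, index: 0)
    let threadsPerGroup = MTLSize(width: 8, height: 8, depth: 1)
    let groups = MTLSize(width: (intermediate.width + 7) / 8,
                         height: (intermediate.height + 7) / 8,
                         depth: intermediate.texture.arrayLength)
    encoder.dispatchThreadgroups(groups, threadsPerThreadgroup: threadsPerGroup)
    encoder.endEncoding()

    commandBuffer.commit()
}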

This leads to another question: is the texture returned by MPSImage.texture a texture array (type2DArray) when the featureChannels of the MPSImage is larger than 4?
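For example, I would check something like this (just a sketch to see what comes back):

import Metal
import MetalPerformanceShaders

// Build an MPSImage with more than 4 feature channels and inspect the backing texture
// to see how the channels are stored.
let device = MTLCreateSystemDefaultDevice()!
let desc = MPSImageDescriptor(channelFormat: .float16,
                              width: 16, height: 16,
                              featureChannels: 32)
let image = MPSImage(device: device, imageDescriptor: desc)

print(image.texture.textureType == .type2DArray)  // is it a 2D texture array?
print(image.texture.arrayLength)                  // how many 4-channel slices?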


Or can I treat the MPSImage as a buffer? If so, what is the data arrangement? Rows -> Columns -> Channels -> numberOfImages?
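For context, this is roughly the kind of kernel I plan to use to walk the data on the GPU. It is only a sketch: the indexing assumes that channel c lives in slice c / 4, component c % 4 of a texture2d_array, which is exactly the layout I am hoping someone can confirm, and "myProcessIntermediate" is a placeholder name:

import Metal

let device = MTLCreateSystemDefaultDevice()!

// Metal source compiled at runtime; used to build the customPipeline from the sketch above.
let kernelSource = """
#include <metal_stdlib>
using namespace metal;

// Reads the intermediate MPSImage, assuming channel c is stored in
// slice c / 4, component c % 4 of a texture2d_array.
kernel void myProcessIntermediate(texture2d_array<half, access::read> inTexture [[texture(0)]],
                                  uint3 gid [[thread_position_in_grid]])
{
    if (gid.x >= inTexture.get_width() || gid.y >= inTexture.get_height()) { return; }

    // gid.z selects the slice; each slice packs 4 feature channels as RGBA.
    half4 fourChannels = inTexture.read(uint2(gid.x, gid.y), gid.z);

    // ... apply my own algorithm to fourChannels here ...
}
"""

let library = try! device.makeLibrary(source: kernelSource, options: nil)
let customPipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "myProcessIntermediate")!)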


Thanks.


Richard Chen

You can't. The MPSTemporaryImage lives only in GPU memory and can't map its storage back to the CPU to be read. You can substitute a regular MPSImage for it in your computation, which can be read, but keep in mind that this negates the memory savings of the temporary image, and many such substitutions may cause your job to exceed available memory on either the GPU or the CPU and cause problems.

Also don't forget to synchronize the MPSImage before you look at its contents. Otherwise, you are liable to see a bunch of NaNs.
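Roughly (a sketch only, not exact code; the synchronize call matters for managed storage on macOS and is harmless where storage is already shared):

import Metal
import MetalPerformanceShaders

// Read an MPSImage's contents on the CPU after the GPU work has finished.
// `intermediate` is the MPSImage used as the layer's destination.
func readBack(_ intermediate: MPSImage, commandBuffer: MTLCommandBuffer) {
    // Encode the synchronization before committing, so the CPU-visible copy is updated.
    intermediate.synchronize(on: commandBuffer)
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()

    // Only now are the contents valid from the CPU; read one slice as an example.
    let bytesPerRow = intermediate.width * 4 * MemoryLayout<UInt16>.stride   // .float16 channel = 2 bytes
    var sliceBytes = [UInt8](repeating: 0, count: bytesPerRow * intermediate.height)
    intermediate.texture.getBytes(&sliceBytes,
                                  bytesPerRow: bytesPerRow,
                                  bytesPerImage: bytesPerRow * intermediate.height,
                                  from: MTLRegionMake2D(0, 0, intermediate.width, intermediate.height),
                                  mipmapLevel: 0,
                                  slice: 0)
}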