Hi all,
I have a question about how to access intermediate results on the GPU (not from the CPU) when using MPSTemporaryImage in the MPS CNN API.
Here is the thing: the current MPS CNN API provides good capability for image classification tasks, but I would like to build my own application on top of Apple's work.
Specifically, I would like to apply my own algorithm to intermediate results inside the CNN, e.g., the output of one of the convolution layers.
I know how to access an MTLBuffer or MTLTexture through the Metal API; however, is there any way to access the data in an MPSTemporaryImage directly from the GPU side (without copying it back to the CPU, to reduce memory transfer)?
Any suggestions?
The possible solution I found is to use MPSImage for the results I want to access (since we cannot rely on the underlying texture of an MPSTemporaryImage), and then create a compute command encoder and call setTexture so that my own Metal kernel can read the results from the MPSImage.
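To make the plan concrete, here is a rough sketch of the host-side encoding I have in mind. This is only my intended approach, not working code; `processFeatures` is a placeholder name for a kernel in my own .metal file, and the dispatch geometry is just a guess:

```swift
import Metal
import MetalPerformanceShaders

// Sketch: bind an MPSImage's backing texture to my own compute kernel
// so the intermediate results never leave the GPU.
// "processFeatures" is a hypothetical kernel name, not an MPS API.
func encodeCustomPass(commandBuffer: MTLCommandBuffer,
                      device: MTLDevice,
                      features: MPSImage) throws {
    let library = device.makeDefaultLibrary()!
    let function = library.makeFunction(name: "processFeatures")!
    let pipeline = try device.makeComputePipelineState(function: function)

    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(pipeline)
    // MPSImage exposes its backing MTLTexture, so no blit back to the
    // CPU is needed between the MPS layers and my kernel.
    encoder.setTexture(features.texture, index: 0)

    let w = pipeline.threadExecutionWidth
    let h = pipeline.maxTotalThreadsPerThreadgroup / w
    let threadsPerGroup = MTLSize(width: w, height: h, depth: 1)
    let groups = MTLSize(width: (features.width + w - 1) / w,
                         height: (features.height + h - 1) / h,
                         depth: features.texture.arrayLength)
    encoder.dispatchThreadgroups(groups,
                                 threadsPerThreadgroup: threadsPerGroup)
    encoder.endEncoding()
}
```

Does this look like the right way to hand an MPSImage to my own kernel, or is there a cheaper path?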
A follow-up question: is the texture returned by MPSImage.texture a texture array (texture2d_array) when the featureChannels of the MPSImage is larger than 4?
Or can I treat the MPSImage as a buffer? If so, what is the data arrangement: rows -> columns -> channels -> numberOfImages?
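For reference, this is how I currently assume the kernel side would read such a texture. My understanding (please correct me if wrong) is that with featureChannels > 4 the slices of the texture2d_array each hold a group of 4 channels packed into the RGBA components:

```metal
#include <metal_stdlib>
using namespace metal;

// Hypothetical kernel matching my assumption: for featureChannels > 4,
// the MPSImage texture is a texture2d_array where slice s holds
// channels 4*s .. 4*s+3 in the texel's RGBA components.
kernel void processFeatures(texture2d_array<half, access::read> features [[texture(0)]],
                            uint3 gid [[thread_position_in_grid]])
{
    if (gid.x >= features.get_width() || gid.y >= features.get_height())
        return;
    // gid.z selects the slice, i.e. one group of 4 feature channels.
    half4 v = features.read(uint2(gid.x, gid.y), gid.z);
    // ... apply my own algorithm to v here ...
}
```

Is that channel-to-slice mapping correct?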
Thanks.
Richard Chen