Are there any recommendations for composing Operations into larger Operations? A couple of example use cases follow:
Let's say my app has an operation queue used for sync tasks. I have an Operation subclass for fetching all users, but since that's a paged API, I've defined another Operation subclass for fetching one page of users and upserting them into the local database. To date, I've been adding the "fetch all users" operation to a global "sync" OperationQueue in my app, and then creating a new operation queue within that operation (as a property of the Operation subclass) to which I add each of the page operations.
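To make the pattern concrete, here's a rough sketch of what I mean (the class names, the page count, and the fetch/upsert bodies are placeholders, not my real code):

```swift
import Foundation

// Child operation: fetches one page of users and upserts them locally.
final class FetchUserPageOperation: Operation {
    let page: Int
    init(page: Int) { self.page = page; super.init() }
    override func main() {
        guard !isCancelled else { return }
        // Fetch page `page` of users and upsert them into the local database.
    }
}

// Outer operation: owns an inner queue that runs the per-page operations.
final class FetchAllUsersOperation: Operation {
    private let pageQueue = OperationQueue()

    override func main() {
        guard !isCancelled else { return }
        // Enqueue one child operation per page and block until they all
        // finish, so the outer operation stays "executing" for the whole sync.
        let pageOps = (1...3).map(FetchUserPageOperation.init)
        pageQueue.addOperations(pageOps, waitUntilFinished: true)
    }
}

// Usage: only the outer operation goes on the app-wide sync queue.
let syncQueue = OperationQueue()
let syncOp = FetchAllUsersOperation()
syncQueue.addOperation(syncOp)
syncQueue.waitUntilAllOperationsAreFinished()
```

The blocking `addOperations(_:waitUntilFinished: true)` call is what keeps the outer operation alive while its children run, which is exactly the part I'm unsure is idiomatic.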
As a more complicated example, let's say I have an image processing pipeline for running images through an ML model. The ML inference is the bottleneck of the pipeline, so I want to make sure that the ML engine (CoreML or TensorFlow or whatever) is always busy. This means that any pre-processing steps like loading the image, resizing and cropping it, and preparing it to send to the model should be in a separate Operation subclass from the actual inference (or each of those pre-processing steps could be in its own Operation).
This approach, however, raises an issue: if the pre-processing steps are faster than the inference step, I end up with far too many pre-processed images sitting in memory, waiting to be run through inference.
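One mitigation I've considered (an assumption on my part, not something I've seen recommended) is to cap how many pre-processed images can be in flight with a counting semaphore, so pre-processing stalls once the inference queue is full enough. The queue names, `maxInFlight`, and the counter standing in for the model run are all placeholders:

```swift
import Foundation

let maxInFlight = 4
let slots = DispatchSemaphore(value: maxInFlight)

let preprocessQueue = OperationQueue()          // pre-processing can run concurrently
let inferenceQueue = OperationQueue()
inferenceQueue.maxConcurrentOperationCount = 1  // serialize access to the ML engine

// Stand-in for "images processed by the model", guarded by a lock.
var processed = 0
let processedLock = NSLock()

for imageIndex in 0..<10 {
    preprocessQueue.addOperation {
        // Block here once maxInFlight pre-processed images are queued,
        // bounding how much sits in memory waiting on inference.
        slots.wait()
        _ = imageIndex
        // ... load, resize, and crop the image here ...
        inferenceQueue.addOperation {
            // ... run the model on the pre-processed image here ...
            processedLock.lock(); processed += 1; processedLock.unlock()
            slots.signal()   // release the slot once inference consumes the result
        }
    }
}

preprocessQueue.waitUntilAllOperationsAreFinished()
inferenceQueue.waitUntilAllOperationsAreFinished()
```

One caveat with this sketch: `slots.wait()` blocks a thread the queue has dispatched, so with many stalled pre-processing operations you tie up threads, which is part of why I'm unsure Operations are the right tool here.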
(Perhaps this use case isn't an ideal use for Operations? 🤷♂️)