Can one perform model inference on the model deployment page? Or is this just a step in the deployment process to the device?
Next, does this model storage count towards any of the various storage quotas?
Does the encrypted ML model get decrypted every time it needs to perform an inference operation on device? Just curious about what's actually happening on the user's device here.
Finally, is this model deployment process the same for macOS 11?
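For context, here is a minimal Swift sketch of what fetching a deployed model and loading it for on-device inference can look like with Core ML's MLModelCollection API; the collection and model identifiers below are hypothetical placeholders, not names from the session.

```swift
import CoreML

// Hypothetical identifiers for illustration; in practice these come from
// your own Core ML Model Deployment dashboard.
let collectionID = "ImageClassifiers"
let modelID = "FlowerClassifier"

// Begin accessing the deployed collection. Core ML downloads and caches
// the models as needed; an encrypted model remains encrypted on disk and
// is decrypted in memory when it is loaded.
_ = MLModelCollection.beginAccessing(identifier: collectionID) { result in
    switch result {
    case .success(let collection):
        guard let entry = collection.entries[modelID] else {
            print("No entry named \(modelID) in the collection")
            return
        }
        do {
            // Load the model from its on-device location and use it for
            // inference like any locally bundled model.
            let model = try MLModel(contentsOf: entry.modelURL)
            print(model.modelDescription)
        } catch {
            print("Failed to load model: \(error)")
        }
    case .failure(let error):
        print("Failed to access collection: \(error)")
    }
}
```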