Posts

Post marked as solved
1 Reply
279 Views
Got the following error when I added the --encrypt flag to the build phase for my .coreml model file: "coremlc: error: generate command model encryption is not supported on the specific deployment target macos". Any insights would be appreciated. Thanks.
Posted by Brianyan.
Post not yet marked as solved
2 Replies
534 Views
Is there any way we can set the number of threads used during Core ML inference? My model is relatively small, and the overhead of launching new threads is too expensive. When using the TensorFlow C API, forcing single-threaded execution results in a significant decrease in CPU usage. (So far, Core ML with multiple threads has three times the CPU usage compared to TensorFlow with a single thread.) Also, I'm wondering if anyone has compared the performance of the TensorFlow C API and Core ML.
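As far as the public API goes, Core ML does not appear to expose a direct thread-count setting. The closest documented control is MLModelConfiguration.computeUnits, which restricts where inference runs (e.g. CPU only, avoiding GPU/Neural Engine dispatch overhead). A minimal sketch of that configuration, assuming a compiled model class named MyModel (a hypothetical placeholder for your own model):

```swift
import CoreML

// Restrict inference to the CPU; this does not set an explicit thread
// count, but it is the closest knob Core ML publicly exposes for
// limiting dispatch overhead on small models.
let config = MLModelConfiguration()
config.computeUnits = .cpuOnly

// "MyModel" is a placeholder for the class Xcode generates from your
// .mlmodel file; pass the configuration at load time.
let model = try MyModel(configuration: config)
```

This is a configuration fragment rather than a complete program; whether .cpuOnly actually reduces CPU usage relative to the default scheduling would need to be measured on the target hardware.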
Posted by Brianyan.