Hi friends,
I have just found that, after enabling model encryption, the inference speed dropped to about 1/10 of the original (unencrypted) model.
Has anyone encountered this?
Thank you.
I profiled the app in Instruments and found that the encrypted model runs inference on the CPU, which is why it is so much slower than the original model.
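For anyone who wants to reproduce the comparison, here is a minimal sketch of how I checked it, assuming an Xcode-generated class named `MyEncryptedModel` (a placeholder; substitute your own generated model class) and an OS version that supports encrypted models (iOS 14 / macOS 11 or later). Setting `computeUnits` explicitly and switching between `.all` and `.cpuOnly` makes it easier to see in Instruments where inference actually runs.

```swift
import CoreML

// Encrypted models must be loaded asynchronously via the generated
// load(configuration:completionHandler:) method.
let config = MLModelConfiguration()
config.computeUnits = .all   // compare against .cpuOnly to isolate the CPU fallback

MyEncryptedModel.load(configuration: config) { result in
    switch result {
    case .success(let model):
        // Time a single prediction here; replace `input` with your model's real input type.
        // let start = CFAbsoluteTimeGetCurrent()
        // let output = try? model.prediction(input: input)
        // print("Inference took \(CFAbsoluteTimeGetCurrent() - start) s")
        _ = model
    case .failure(let error):
        print("Failed to load encrypted model: \(error)")
    }
}
```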
Hello @wild-bee,
Please file a bug report for this issue using Feedback Assistant. It is unexpected that model encryption would affect inference time.
-- Greg
Filed the bug with Feedback Assistant.