Hi guys,
Just to share a concern I personally realised an hour ago through the following article: "www.cleverhans.io/security/privacy/ml/2016/12/16/breaking-things-is-easy.html". I have to say I was surprised how easy it is to force classification models to make mistakes.
In iOS 12 there is the option to run classification with most of the model available on the operating system, which makes the task of crafting fake images that produce wrong classifications easier. Even if Apple takes this into account in the training process to minimize this type of attack, we should all be aware of it when designing apps based on deep learning in general and CNNs in particular.
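To make the concern concrete, here is a minimal sketch of the fast-gradient-sign idea described in the article, applied to a toy NumPy logistic classifier (the weights, input, and perturbation budget are all illustrative assumptions, not anything from a real iOS model):

```python
import numpy as np

# Toy linear "classifier": weights and input are purely illustrative.
w = np.array([2.0, -1.0])   # assumed model weights
x = np.array([0.2, 0.1])    # a clean input, correctly classified as class 1

def predict(v):
    """Predicted class: 1 if the score w.v is positive, else 0."""
    return int(v @ w > 0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Gradient of the cross-entropy loss w.r.t. the INPUT (not the weights),
# for true label y = 1: dL/dx = (sigmoid(w.x) - 1) * w
grad = (sigmoid(x @ w) - 1.0) * w

# Fast gradient sign method: take a small step that increases the loss.
eps = 0.25                  # perturbation budget (assumed)
x_adv = x + eps * np.sign(grad)

print(predict(x))       # clean input  -> 1
print(predict(x_adv))   # perturbed input -> 0 (classification flips)
```

Even this two-parameter model flips its answer under a small, targeted perturbation; the article shows the same effect on deep networks with perturbations that are invisible to humans.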
Cheers,
Manuel
Interesting article.
But, in your conclusion, why care specifically about CNNs (you mean convolutional neural nets, I suppose)?
I see it more as a general robustness issue:
- for images, the training should be done with additional noised images, to increase robustness
- likewise for numeric data sets
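The noised-training suggestion above can be sketched as a simple data-augmentation step (a minimal NumPy sketch; the array shapes, noise level, and number of copies are assumptions for illustration):

```python
import numpy as np

def augment_with_noise(X, y, sigma=0.05, copies=2, seed=0):
    """Return the data set plus `copies` Gaussian-noised duplicates.

    X: (n_samples, n_features) inputs; y: (n_samples,) labels.
    sigma controls the noise strength (illustrative default).
    """
    rng = np.random.default_rng(seed)
    noisy = [X + rng.normal(0.0, sigma, size=X.shape) for _ in range(copies)]
    X_aug = np.concatenate([X] + noisy, axis=0)
    y_aug = np.concatenate([y] * (copies + 1), axis=0)  # labels unchanged
    return X_aug, y_aug

# Example: 100 samples with 8 numeric features (assumed sizes).
X = np.random.rand(100, 8)
y = np.random.randint(0, 2, size=100)
X_aug, y_aug = augment_with_noise(X, y)
print(X_aug.shape, y_aug.shape)  # (300, 8) (300,)
```

The same function works for numeric tabular data and flattened images alike, which is why the reply frames this as a general robustness question rather than a CNN-specific one. Note that plain noise augmentation helps against random corruption but is known to be a weak defence against the targeted attacks the article describes.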