Reinforcement learning allows machines to learn from their own experienc...
Adversarial reprogramming allows stealing computational resources by repurposing...
Sponge examples are test-time inputs carefully optimized to increase energy...
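As a rough illustration of the sponge-example idea, the sketch below performs gradient ascent on an input so as to inflate the network's activations, using total activation magnitude as a crude proxy for energy cost. The PyTorch model, step count, and this particular proxy are illustrative assumptions, not details taken from the abstract above.

    # Minimal sketch of a sponge-style input search (illustrative only).
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                          nn.Linear(256, 10))
    model.eval()

    activations = []
    def record(_, __, output):            # forward hook storing each layer output
        activations.append(output)

    for layer in model:
        layer.register_forward_hook(record)

    x = torch.rand(1, 784, requires_grad=True)   # start from a random input
    opt = torch.optim.Adam([x], lr=0.05)

    for _ in range(200):
        activations.clear()
        opt.zero_grad()
        model(x)
        # Negative sign: the optimizer minimizes, so this ascends on activation mass.
        energy_proxy = sum(a.abs().sum() for a in activations)
        (-energy_proxy).backward()
        opt.step()
        x.data.clamp_(0, 1)               # keep the input in a valid range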
Adversarial patches are optimized contiguous pixel blocks in an input im...
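The sketch below shows the general recipe for optimizing such a patch. The model (an untrained torchvision ResNet-18), target label, patch size, fixed top-left placement, and random stand-in images are all placeholder assumptions rather than details of the work summarized above.

    # Illustrative adversarial patch: a small pixel block optimized so that,
    # when pasted onto an input, the classifier predicts a chosen class.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(weights=None).eval()
    patch = torch.rand(1, 3, 32, 32, requires_grad=True)    # the patch being optimized
    opt = torch.optim.Adam([patch], lr=0.01)
    target = torch.tensor([859])                            # arbitrary target label

    def apply_patch(images, patch):
        # Paste the (clamped) patch into the top-left corner in a differentiable way.
        p = F.pad(patch.clamp(0, 1), (0, 192, 0, 192))      # pad 32x32 patch to 224x224
        mask = F.pad(torch.ones_like(patch), (0, 192, 0, 192))
        return images * (1 - mask) + p * mask

    for _ in range(50):
        images = torch.rand(8, 3, 224, 224)                 # stand-in for real images
        opt.zero_grad()
        logits = model(apply_patch(images, patch))
        loss = F.cross_entropy(logits, target.expand(8))    # push predictions to target
        loss.backward()
        opt.step()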
Adversarial reprogramming allows repurposing a machine-learning model to...
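A minimal sketch of the reprogramming idea, under assumptions of mine (frozen torchvision ResNet-18 as the victim, MNIST-sized inputs, an identity label map, random stand-in data), is given below: a single learned perturbation embeds the small inputs into the victim's input space, and the new task's classes are read off a fixed subset of the victim's outputs.

    # Rough sketch of adversarial reprogramming with placeholder model and data.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    victim = models.resnet18(weights=None).eval()
    for p in victim.parameters():
        p.requires_grad_(False)                   # the victim is never modified

    program = torch.zeros(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([program], lr=0.05)
    label_map = torch.arange(10)                  # victim classes 0..9 -> digits 0..9

    def embed(x):
        # Place the 1x28x28 inputs at the center of a zero canvas, replicate to
        # 3 channels, then add the (bounded) program around them.
        canvas = F.pad(x, (98, 98, 98, 98)).repeat(1, 3, 1, 1)
        return torch.clamp(canvas + torch.tanh(program), 0, 1)

    for _ in range(100):
        x = torch.rand(16, 1, 28, 28)             # stand-in for MNIST batch
        y = torch.randint(0, 10, (16,))           # stand-in for digit labels
        opt.zero_grad()
        logits = victim(embed(x))[:, label_map]   # read out only the mapped classes
        F.cross_entropy(logits, y).backward()
        opt.step()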
AI has provided us with the ability to automate tasks, extract informati...
Evaluating robustness of machine-learning models to adversarial examples...
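One common way to carry out such an evaluation is a security evaluation curve, i.e., accuracy measured under perturbations of increasing budget. The sketch below uses single-step FGSM and a toy linear model purely for illustration; a faithful evaluation would rely on stronger, multi-step attacks.

    # Sketch of a security evaluation curve: accuracy vs. L-infinity budget eps.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10)).eval()
    x = torch.rand(64, 1, 28, 28)
    y = torch.randint(0, 10, (64,))

    def fgsm(model, x, y, eps):
        x_adv = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        return (x + eps * grad.sign()).clamp(0, 1)

    for eps in [0.0, 0.05, 0.1, 0.2, 0.3]:
        x_adv = fgsm(model, x, y, eps)
        acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
        print(f"eps={eps:.2f}  accuracy={acc:.2f}")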
Backdoor attacks inject poisoning samples during training, with the goal...
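The toy snippet below illustrates only the poisoning step: a small trigger is stamped onto a fraction of the training images and their labels are flipped to the attacker's target class. Trigger shape, poisoning rate, and data are made-up choices for the example.

    # Toy backdoor poisoning of an image training set.
    import torch

    def poison(images, labels, target_class=0, rate=0.05):
        n_poison = int(rate * len(images))
        idx = torch.randperm(len(images))[:n_poison]
        images = images.clone()
        labels = labels.clone()
        images[idx, :, -3:, -3:] = 1.0   # 3x3 white trigger in the bottom-right corner
        labels[idx] = target_class       # relabel triggered samples to the target class
        return images, labels

    x_train = torch.rand(1000, 1, 28, 28)
    y_train = torch.randint(0, 10, (1000,))
    x_bd, y_bd = poison(x_train, y_train)
    # Training on (x_bd, y_bd) teaches the model to associate the trigger with the
    # target class, while accuracy on clean inputs remains largely unaffected.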
Defending machine learning models from adversarial attacks is still a challenging...
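One widely studied (though partial) defense is adversarial training, sketched below with a PGD inner attack. This is a generic illustration, not necessarily the defense discussed in the work above; the model, data, and hyperparameters are placeholders.

    # Sketch of adversarial training: each minibatch is replaced by adversarial
    # examples generated on the fly against the current model.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    def pgd(model, x, y, eps=0.1, alpha=0.02, steps=5):
        # Projected gradient descent within an L-infinity ball of radius eps.
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
            x_adv = (x_adv + alpha * grad.sign()).detach()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        return x_adv

    for _ in range(100):                     # stand-in training loop and data
        x = torch.rand(32, 1, 28, 28)
        y = torch.randint(0, 10, (32,))
        x_adv = pgd(model, x, y)             # attack the current model
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        opt.step()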
One of the most concerning threats for modern AI systems is data poisoni...
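As a minimal, self-contained illustration of how little training-set tampering can hurt a learner, the snippet below flips a fraction of the training labels, which is the crudest form of poisoning; optimized poisoning points would be far more effective. The dataset and classifier are arbitrary scikit-learn choices, not those of the work above.

    # Label-flip poisoning demo: compare test accuracy with and without poisoning.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    flip = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
    y_poisoned[flip] = 1 - y_poisoned[flip]      # flip 20% of the training labels

    clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
    pois_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
    print(f"clean: {clean_acc:.3f}  poisoned: {pois_acc:.3f}")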
Adversarial attacks on machine learning-based classifiers, along with de...
Machine-learning algorithms trained on features extracted from static co...
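A typical pipeline in this setting, sketched below under assumptions of mine rather than details of the paper, vectorizes statically extracted binary indicators (e.g., requested permissions or API calls) with a hashing trick and trains a linear classifier on them.

    # Illustrative static-feature pipeline with made-up feature names and labels.
    from sklearn.feature_extraction import FeatureHasher
    from sklearn.svm import LinearSVC

    samples = [
        {"perm:SEND_SMS": 1, "api:getDeviceId": 1},
        {"perm:INTERNET": 1, "api:loadLibrary": 1},
    ]
    labels = [1, 0]                              # 1 = malicious, 0 = benign

    X = FeatureHasher(n_features=2**16).transform(samples)
    clf = LinearSVC().fit(X, labels)
    print(clf.predict(X))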
We present secml, an open-source Python library for secure and explainab...
Despite the impressive performance reported by deep neural networks in ...
Transferability captures the ability of an attack against a machine-lear...
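Transferability is usually measured empirically: attacks are optimized against a surrogate model and then replayed against the target. The sketch below does this with single-step FGSM and two untrained toy models, so the reported numbers are meaningless; it only shows the measurement protocol.

    # Transferability test: craft on a surrogate, evaluate on a separate target.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    surrogate = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
    target = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    # In practice both models would be trained on the same task before this test.

    x = torch.rand(64, 1, 28, 28)
    y = torch.randint(0, 10, (64,))

    x_adv = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(surrogate(x_adv), y), x_adv)
    x_adv = (x + 0.2 * grad.sign()).clamp(0, 1)  # attack computed on the surrogate only

    for name, model in [("surrogate", surrogate), ("target", target)]:
        acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
        print(f"{name} accuracy under transferred attack: {acc:.2f}")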
Machine-learning methods have already been exploited as useful tools for...
In several applications, input samples are more naturally represented in...
Deep neural networks have been widely adopted in recent years, exhibitin...