Current state-of-the-art self-supervised approaches are effective when ...
Large language models (LLMs) released for public use incorporate guardra...
Every major technical invention resurfaces the dual-use dilemma – the ne...
We propose Automatic Feature Explanation using Contrasting Concepts (FAL...
Image-text contrastive models such as CLIP are useful for a variety of d...
Representations learned by pre-training a neural network on a large data...
We observe that the mapping between an image's representation in one mod...
Few-shot classification (FSC) entails learning novel classes given only ...
The literature on provable robustness in machine learning has primarily ...
Large-scale training of modern deep learning models heavily relies on pu...
In data poisoning attacks, an adversary tries to change a model's predic...
The ability to remove features from the input of machine learning models...
Though the background is an important signal for image classification, o...
Training convolutional neural networks (CNNs) with a strict 1-Lipschitz ...
Several existing works study either adversarial or natural distributiona...
Many applications of reinforcement learning can be formalized as goal-co...
Communication is important in many multi-agent reinforcement learning (M...
With the growth of machine learning for structured data, the need for re...
Deep neural networks can be unreliable in the real world especially when...
In recent years, researchers have extensively studied adversarial robust...
Self-supervised learning methods have shown impressive results in downst...
Certified robustness in machine learning has primarily focused on advers...
While datasets with single-label supervision have propelled rapid advanc...
Adversarial training (AT) is considered to be one of the most reliable d...
Recent studies have shown that robustness to adversarial attacks can be ...
Object detection plays a key role in many security-critical systems. Adv...
Saliency methods have been widely used to highlight important input feat...
Existing meta-learners primarily focus on improving the average task acc...
A key reason for the lack of reliability of deep neural networks in the ...
Standard training datasets for deep learning often contain objects in co...
Adversarial robustness of deep models is pivotal in ensuring safe deploy...
Training convolutional neural networks (CNNs) with a strict Lipschitz co...
The study of provable adversarial robustness for deep neural network (DN...
Training convolutional neural networks with a Lipschitz constraint under...
A broad class of unsupervised deep learning methods such as Generative A...
Randomized smoothing is a general technique for computing sample-depende...
Adversarial training is one of the most effective defenses against adver...
Saliency methods are used extensively to highlight the importance of inp...
Randomized smoothing is a popular way of providing robustness guarantees...
Optimal Transport (OT) distances such as Wasserstein have been used in s...
The lottery ticket hypothesis suggests that sparse, sub-networks of a gi...
Building on the success of deep learning, Generative Adversarial Network...
Randomized smoothing has been shown to provide good certified-robustness...
Adversarial training is a popular defense strategy against attack threat...
Adversarial poisoning attacks distort training data in order to corrupt ...
Influence functions approximate the effect of training samples in test-t...
We present adversarial attacks and defenses for the perceptual adversari...
Deep neural networks are being increasingly used in real world applicati...
A robustness certificate is the minimum distance of a given input to the...
GANs for time series data often use sliding windows or self-attention to...