Unsupervised contrastive learning methods have recently seen significant...
Large-scale language models have become increasingly challenging and exp...
Unsupervised domain adaptive person re-identification (Re-ID) methods al...
State-of-the-art deep neural networks are trained with large amounts (mi...
Adaptive gradient methods, such as Adam and LAMB, have demonstrated exce...
Replay-based methods have proved their effectiveness on online continual...
Human evaluations are often required for abstractive summary evaluations...
Decentralized partially observable Markov decision processes (Dec-POMDPs...
Pre-trained language models (PLMs) have accomplished impressive achievem...
As one of the most fundamental techniques in multimodal learning, cross-...
Continual learning (CL) can help pre-trained vision-language models effi...
Data pruning aims to obtain lossless performances as training on the ori...
Dataset distillation reduces the network training cost by synthesizing s...
Various recent methods attempt to implement rotation-invariant 3D deep l...
Dataset distillation aims to generate small datasets with little informa...
In human-robot collaboration, the objectives of the human are often unkn...
In recent years, large-scale models have demonstrated state-of-the-art p...
Foundation models have impressive performance and generalization capabil...
New architecture GPUs like A100 are now equipped with multi-instance GPU...
Learning to predict masked tokens in a sequence has been shown to be a p...
Diagram object detection is the key basis of practical applications such...
In recent years, the number of parameters of one deep learning (DL) mode...
Object pose estimation is an important topic in 3D vision. Though most c...
This paper presents a general one-shot object localization algorithm cal...
A wide range of control perspectives have been explored in controllable ...
Large transformer models display promising performance on a wide range o...
Though vision transformers (ViTs) have exhibited impressive ability for ...
Deep learning recommendation models (DLRMs) have been widely applied in ...
The success of today's AI applications requires not only model training ...
Domain Adaptation of Black-box Predictors (DABP) aims to learn a model o...
Face recognition, as one of the most successful applications in artifici...
Learning with noisy labels has aroused much research interest since data...
In this paper, we tackle the problem of category-level 9D pose estimatio...
Recently, Sharpness-Aware Minimization (SAM), which connects the geometr...
Dataset condensation aims at reducing the network training effort throug...
Protein structure prediction is an important method for understanding ge...
Recent self-supervised contrastive learning methods greatly benefit from...
Pixel-level 2D object semantic understanding is an important topic in co...
The Transformer architecture has improved the performance of deep learni...
When directly using existing text generation datasets for controllable g...
This paper looks at solving collaborative planning problems formalized a...
The pre-trained model (PTM) is revolutionizing Artificial intelligence (...
Efficient GPU resource scheduling is essential to maximize resource util...
Large-batch training has become a commonly used technique when training ...
Data parallelism does a good job of speeding up training. However, w...
Recent Natural Language Processing techniques have been refreshing t...
Face recognition has achieved significant progress in deep-learning era ...
Huge neural network models have shown unprecedented performance in real-...
Detecting aligned 3D keypoints is essential under many scenarios such as...
Point cloud analysis without pose priors is very challenging in real app...