research ∙ 05/28/2022
Contributor-Aware Defenses Against Adversarial Backdoor Attacks
Deep neural networks for image classification are well-known to be vulne...
          
research ∙ 02/09/2022
False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger
In this brief, we show that sequentially learning new information presen...
          
research ∙ 05/28/2021
Rethinking Noisy Label Models: Labeler-Dependent Noise with Adversarial Awareness
Most studies on learning from noisy labels rely on unrealistic models of...
          
research ∙ 02/16/2021
Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models
Continual (or "incremental") learning approaches are employed when addit...
          
research ∙ 02/11/2021
OpinionRank: Extracting Ground Truth Labels from Unreliable Expert Opinions with Graph-Based Spectral Ranking
As larger and more comprehensive datasets become standard in contemporar...
          
research ∙ 11/26/2020
Comparative Analysis of Extreme Verification Latency Learning Algorithms
One of the more challenging real-world problems in computational intelli...
          
research ∙ 02/17/2020
Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks
Artificial neural networks are well-known to be susceptible to catastrop...
          
research ∙ 02/20/2018