Reading and Writing: Discriminative and Generative Modeling for Self-Supervised Text Recognition

07/01/2022
by Mingkun Yang, et al.

Existing text recognition methods usually require large-scale training data. Most of them rely on synthetic training data because annotated real images are scarce, but the domain gap between synthetic and real data limits the performance of text recognition models. Recent self-supervised text recognition methods attempt to exploit unlabeled real images through contrastive learning, which mainly learns the discrimination of text images. Inspired by the observation that humans learn to recognize text through both reading and writing, we propose to learn both discrimination and generation by integrating contrastive learning and masked image modeling in our self-supervised method. The contrastive learning branch learns the discrimination of text images, imitating the reading behavior of humans. Meanwhile, masked image modeling is introduced to text recognition for the first time to learn the contextual generation of text images, which is analogous to writing. The experimental results show that our method outperforms previous self-supervised text recognition methods by 10.2%, and the proposed text recognizer exceeds previous state-of-the-art text recognition methods by 5.3% on average. We also demonstrate that our pre-trained model can be easily applied to other text-related tasks with obvious performance gains.
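
Concretely, the pre-training objective described above can be read as the sum of a contrastive "reading" loss on two augmented views and a masked-image-modeling "writing" loss that reconstructs masked patches, both computed with a shared encoder. The sketch below is a minimal illustration of that idea in PyTorch; the toy encoder, patch dimensions, masking ratio, and loss weighting are assumptions for illustration, not the authors' actual implementation.

```python
# Minimal sketch: shared encoder trained with (1) a contrastive "reading"
# objective and (2) a masked-image-modeling "writing" objective.
# All sizes and hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchEncoder(nn.Module):
    """Toy patch embedding + transformer encoder standing in for a ViT-style backbone."""

    def __init__(self, patch_dim=3 * 8 * 32, dim=256, depth=4, heads=4):
        super().__init__()
        self.embed = nn.Linear(patch_dim, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, patches):                     # patches: (B, N, patch_dim)
        return self.encoder(self.embed(patches))    # (B, N, dim)


def contrastive_loss(z1, z2, tau=0.2):
    """InfoNCE between pooled features of two augmented views (the "reading" branch)."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                      # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)


def mim_loss(encoder, decoder, patches, mask_ratio=0.6):
    """Mask a fraction of patches and regress their raw pixels (the "writing" branch)."""
    mask = torch.rand(patches.shape[:2], device=patches.device) < mask_ratio
    corrupted = patches.masked_fill(mask.unsqueeze(-1), 0.0)
    pred = decoder(encoder(corrupted))              # (B, N, patch_dim)
    return F.mse_loss(pred[mask], patches[mask])


if __name__ == "__main__":
    encoder = PatchEncoder()
    decoder = nn.Linear(256, 3 * 8 * 32)            # light pixel-regression head

    # Two augmented views of a batch of text images, already split into patches.
    view1 = torch.randn(8, 16, 3 * 8 * 32)
    view2 = torch.randn(8, 16, 3 * 8 * 32)

    z1 = encoder(view1).mean(dim=1)                 # pooled image-level features
    z2 = encoder(view2).mean(dim=1)

    # Joint objective: "reading" (discrimination) + "writing" (generation).
    loss = contrastive_loss(z1, z2) + 1.0 * mim_loss(encoder, decoder, view1)
    loss.backward()
    print(float(loss))
```

In this reading, the contrastive term pulls together representations of the same text image under different augmentations, while the reconstruction term forces the encoder to model the visual context of characters well enough to regenerate masked regions.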
