CDistNet: Perceiving Multi-Domain Character Distance for Robust Text Recognition

by Tianlun Zheng, et al.

The attention-based encoder-decoder framework is becoming popular in scene text recognition, largely because of its superiority in integrating recognition clues from both the visual and semantic domains. However, recent studies show that the two clues can be misaligned on difficult text (e.g., text with rare shapes) and introduce constraints such as character position to alleviate the problem. Despite some success, a content-free positional embedding can hardly associate stably with meaningful local image regions. In this paper, we propose a novel module called Multi-Domain Character Distance Perception (MDCDP) to establish a visually and semantically related position encoding. MDCDP uses the positional embedding to query both visual and semantic features following the attention mechanism. It naturally encodes the positional clue, which describes both visual and semantic distances among characters. We develop a novel architecture named CDistNet that stacks MDCDP several times to guide precise distance modeling. As a result, the visual-semantic alignment is well established even when various difficulties are present. We apply CDistNet to two augmented datasets and six public benchmarks. The experiments demonstrate that CDistNet achieves state-of-the-art recognition accuracy, and visualization shows that it attains proper attention localization in both the visual and semantic domains. We will release our code upon acceptance.
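The abstract's core idea can be illustrated with a minimal sketch: a positional embedding acts as the query in scaled dot-product attention over visual features and over semantic features, and the two results are fused into a position code informed by both domains. This is only an illustrative approximation, not the authors' implementation; the function names (`attend`, `mdcdp_step`) and the element-wise-sum fusion are assumptions for clarity.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(queries, keys, values):
    # Scaled dot-product attention: each query row attends over keys/values.
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

def mdcdp_step(pos_emb, vis_feat, sem_feat):
    # Sketch of one MDCDP-style step: the positional embedding queries the
    # visual and the semantic features separately, then the two attended
    # results are fused (here simply element-wise sum, an assumption) into
    # a distance-aware position encoding.
    vis_out = attend(pos_emb, vis_feat, vis_feat)
    sem_out = attend(pos_emb, sem_feat, sem_feat)
    return [[a + b for a, b in zip(v, s)] for v, s in zip(vis_out, sem_out)]
```

Stacking `mdcdp_step` several times, each time feeding the fused output back in as the new positional query, mirrors the paper's idea of stacking MDCDP to refine distance modeling.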


