Moto: Enhancing Embedding with Multiple Joint Factors for Chinese Text Classification

by Xunzhu Tang et al.

Recently, language representation techniques have achieved strong performance in text classification. However, most existing representation models are designed specifically for English material and may fail on Chinese because of the large differences between the two languages. In fact, most existing methods for Chinese text classification process texts at only a single level. However, as a special kind of hieroglyphic writing, the radicals of Chinese characters are good semantic carriers. In addition, Pinyin codes carry the semantics of tones, and Wubi reflects stroke-structure information. Unfortunately, previous research has not found an effective way to distill the useful parts of these four factors and fuse them. In this work, we propose a novel model called Moto: Enhancing Embedding with Multiple Joint Factors. Specifically, we design an attention mechanism that distills the useful parts by fusing the four levels of information above more effectively. We conduct extensive experiments on four popular tasks. The empirical results show that Moto achieves state-of-the-art performance: 0.8316 (F_1-score, a 2.11% improvement) on Chinese news titles, 96.38 (a 1.24% improvement) on the Fudan Corpus, and 0.9633 (a 3.26% improvement) on THUCNews.
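The core idea of fusing several per-character factors (character, radical, Pinyin, Wubi) with an attention weighting can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the fixed query vector stands in for the learned attention parameters, and the function name `attention_fuse` is our own.

```python
import numpy as np

def attention_fuse(factors):
    """Fuse per-position embeddings from several factors (e.g. character,
    radical, Pinyin, Wubi) with a softmax attention weighting.

    factors: list of arrays, each of shape (seq_len, dim).
    Returns a fused array of shape (seq_len, dim).
    """
    stacked = np.stack(factors)               # (n_factors, seq_len, dim)
    dim = stacked.shape[-1]
    # Score each factor at each position; a trained model would learn this
    # query, here we use a fixed unit-norm vector for illustration.
    query = np.ones(dim) / np.sqrt(dim)
    scores = stacked @ query                  # (n_factors, seq_len)
    # Softmax over the factor axis: weights per position sum to 1.
    weights = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)
    # Weighted sum over factors yields one fused embedding per position.
    return (weights[..., None] * stacked).sum(axis=0)
```

In a real model the fused embeddings would then feed a standard classifier; the attention weights let informative factors (e.g. radicals for rare characters) dominate position by position.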


geoGAT: Graph Model Based on Attention Mechanism for Geographic Text Classification

In the area of geographic information processing, there is little research...

Character-level Convolutional Network for Text Classification Applied to Chinese Corpus

This article provides an interesting exploration of character-level conv...

Which Encoding is the Best for Text Classification in Chinese, English, Japanese and Korean?

This article offers an empirical study on the different ways of encoding...

Detect Camouflaged Spam Content via StoneSkipping: Graph and Text Joint Embedding for Chinese Character Variation Representation

The task of Chinese text spam detection is very challenging due to both ...

Adaptive Region Embedding for Text Classification

Deep learning models such as convolutional neural networks and recurrent...

SikuGPT: A Generative Pre-trained Model for Intelligent Information Processing of Ancient Texts from the Perspective of Digital Humanities

The rapid advance in artificial intelligence technology has facilitated ...

GLS-CSC: A Simple but Effective Strategy to Mitigate Chinese STM Models' Over-Reliance on Superficial Clue

Pre-trained models have achieved success in Chinese Short Text Matching ...
