SeMemNN: A Semantic Matrix-Based Memory Neural Network for Text Classification

03/04/2020
by Changzeng Fu, et al.

Text categorization is the task of assigning labels to documents written in natural language, and it has numerous real-world applications, ranging from sentiment analysis to traditional topic assignment. In this paper, we propose five configurations of a semantic matrix-based memory neural network trained in an end-to-end manner and evaluate them on two corpora of news articles (AG News, Sogou News). Our best configuration outperforms the baseline VDCNN models on the text classification task and learns semantics faster. We also evaluate the model on small-scale datasets, where it again achieves better results than VDCNN. This paper is to appear in the Proceedings of the 2020 IEEE 14th International Conference on Semantic Computing (ICSC 2020), San Diego, California, 2020.
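To make the general idea of a memory-network-style text classifier concrete, the PyTorch sketch below embeds a document into a memory matrix, reads it with a learned attention query, and classifies the attended summary. This is a minimal single-hop sketch under assumed hyperparameters; the class name MemoryTextClassifier and its details are illustrative, not the paper's exact SeMemNN configurations.

# Minimal sketch of a memory-network-style text classifier (assumed design,
# not the authors' exact SeMemNN architecture): embed words into a memory
# matrix, attend over it with a learned query, classify the weighted summary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryTextClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_classes=4):
        super().__init__()
        self.mem_embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)  # memory (key) embedding
        self.out_embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)  # output (value) embedding
        self.query = nn.Parameter(torch.randn(embed_dim))                    # learned query vector
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens):                 # tokens: (batch, seq_len) word ids, 0 = padding
        keys = self.mem_embed(tokens)          # (batch, seq_len, embed_dim)
        values = self.out_embed(tokens)        # (batch, seq_len, embed_dim)
        scores = keys @ self.query             # (batch, seq_len) attention logits
        scores = scores.masked_fill(tokens == 0, float("-inf"))
        attn = F.softmax(scores, dim=-1)       # attention weights over memory slots
        summary = (attn.unsqueeze(-1) * values).sum(dim=1)  # weighted memory read
        return self.classifier(summary)        # class logits, trained end to end

# Example: four-way classification, as in AG News
model = MemoryTextClassifier(vocab_size=50000, num_classes=4)
logits = model(torch.randint(1, 50000, (8, 60)))            # batch of 8 padded sequences
loss = F.cross_entropy(logits, torch.randint(0, 4, (8,)))   # standard end-to-end training loss

Because the whole pipeline (embeddings, attention read, classifier) is differentiable, the model can be trained end to end with ordinary cross-entropy, which is the training setup the abstract describes.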


Related research:

05/26/2017 · A WL-SPPIM Semantic Model for Document Classification
In this paper, we explore SPPIM-based text classification method, and th...

05/31/2020 · Improve Document Embedding for Text Categorization Through Deep Siamese Neural Network
Due to the increasing amount of data on the internet, finding a highly-i...

07/07/2021 · Hierarchical Text Classification of Urdu News using Deep Neural Network
Digital text is increasing day by day on the internet. It is very challe...

01/24/2022 · Classification Of Fake News Headline Based On Neural Networks
Over the last few years, text classification is one of the fundamental t...

03/22/2023 · Analyzing the Generalizability of Deep Contextualized Language Representations For Text Classification
This study evaluates the robustness of two state-of-the-art deep context...

05/22/2022 · All Birds with One Stone: Multi-task Text Classification for Efficient Inference with One Forward Pass
Multi-Task Learning (MTL) models have shown their robustness, effectiven...

10/01/2020 · Assessing Robustness of Text Classification through Maximal Safe Radius Computation
Neural network NLP models are vulnerable to small modifications of the i...
