Using BERT Embeddings to Model Word Importance in Conversational Transcripts for Deaf and Hard of Hearing Users

06/24/2022
by Akhter Al Amin, et al.

Deaf and Hard of Hearing (DHH) individuals regularly rely on captioning while watching live TV. Live TV captioning is evaluated by regulatory agencies using various caption evaluation metrics. However, these metrics are often not informed by the preferences of DHH users or by how meaningful the captions are to them. There is a need for caption evaluation metrics that take the relative importance of words in a transcript into account. We conducted a correlation analysis between two types of word embeddings and human-annotated word-importance scores in an existing corpus. We found that normalized contextualized word embeddings generated using BERT correlated better with manually annotated importance scores than word2vec-based embeddings. We make available a pairing of word embeddings with their human-annotated importance scores. We also demonstrate proof-of-concept utility by training word importance models, achieving an F1-score of 0.57 on the 6-class word importance classification task.
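As an illustration of the kind of modeling the abstract describes, the sketch below extracts per-word contextualized embeddings from BERT (averaging WordPiece sub-tokens and L2-normalizing) and fits a simple classifier on a toy set of 6-class importance labels. The example sentence, the labels, the choice of `bert-base-uncased`, and the logistic-regression classifier are assumptions made for illustration only; they are not the paper's corpus or model.

```python
# Minimal sketch, not the authors' exact pipeline: contextualized word
# embeddings from BERT plus a simple 6-class word-importance classifier.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_embeddings(words):
    """Return one contextualized vector per word, averaging WordPiece pieces."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # (num_tokens, 768)
    word_ids = enc.word_ids(0)                       # maps each piece to its word
    vectors = []
    for i in range(len(words)):
        piece_idx = [j for j, w in enumerate(word_ids) if w == i]
        vec = hidden[piece_idx].mean(dim=0).numpy()  # average sub-token vectors
        vectors.append(vec / np.linalg.norm(vec))    # L2-normalize
    return np.stack(vectors)

# Hypothetical annotated transcript: each word paired with an importance
# class in {0..5}, standing in for the human-annotated corpus labels.
words  = ["the", "weather", "service", "issued", "a", "tornado", "warning"]
labels = [0, 3, 3, 2, 0, 5, 5]

X = word_embeddings(words)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
pred = clf.predict(X)
print("macro F1:", f1_score(labels, pred, average="macro"))
```

In practice one would train on the full human-annotated corpus and report F1 on held-out transcripts rather than on the training words themselves; the toy data here only shows the shape of the task.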


