Measuring Improvement of F_1-Scores in Detection of Self-Admitted Technical Debt

03/16/2023
by   William Aiken, et al.

Research in Artificial Intelligence and Machine Learning has produced rapid, significant improvements on Natural Language Processing (NLP) tasks. Using deep learning, researchers in Software Engineering have leveraged repository comments to build accurate methods for detecting Self-Admitted Technical Debt (SATD) in the code of 20 open-source Java projects. In this work, we improve SATD detection with a novel approach based on the Bidirectional Encoder Representations from Transformers (BERT) architecture. For comparison, we re-evaluated previous deep learning methods and applied stratified 10-fold cross-validation to report reliable F_1-scores. We examine our model in both cross-project and intra-project contexts. In each context, we use re-sampling and duplication as augmentation strategies to account for data imbalance. We find that our trained BERT model improves over the best performance of all previous methods in 19 of the 20 projects in the cross-project scenario. However, the data augmentation techniques were not sufficient to overcome the lack of data in the intra-project scenarios, where existing methods still perform better. Future research will look into ways to diversify SATD datasets in order to maximize the latent power of large BERT models.
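The evaluation protocol described above (stratified 10-fold cross-validation with F_1 scoring, plus duplication of minority-class examples to counter imbalance) can be sketched as follows. This is a hypothetical illustration, not the authors' pipeline: the `duplicate_minority` and `cross_validated_f1` helpers are made up for this sketch, and a lightweight TF-IDF + logistic regression classifier stands in for the fine-tuned BERT model, which is too heavy for a short snippet.

```python
# Sketch of the abstract's evaluation protocol: stratified 10-fold CV with
# per-fold F1 scores, and duplication-based oversampling on the training folds.
# TF-IDF + logistic regression is a stand-in for the paper's BERT model.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def duplicate_minority(texts, labels, rng=None):
    """Oversample the minority class by duplicating its examples
    until both classes have the same number of instances."""
    rng = rng or np.random.default_rng(0)
    texts, labels = np.asarray(texts, dtype=object), np.asarray(labels)
    counts = {c: int((labels == c).sum()) for c in np.unique(labels)}
    minority = min(counts, key=counts.get)
    deficit = max(counts.values()) - counts[minority]
    idx = np.flatnonzero(labels == minority)
    extra = rng.choice(idx, size=deficit, replace=True)
    return (np.concatenate([texts, texts[extra]]),
            np.concatenate([labels, labels[extra]]))

def cross_validated_f1(texts, labels, n_splits=10):
    """Return per-fold F1 scores from stratified k-fold cross-validation,
    oversampling only the training split of each fold."""
    texts, labels = np.asarray(texts, dtype=object), np.asarray(labels)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in skf.split(texts, labels):
        X_tr, y_tr = duplicate_minority(texts[train_idx], labels[train_idx])
        vec = TfidfVectorizer()
        clf = LogisticRegression(max_iter=1000)
        clf.fit(vec.fit_transform(list(X_tr)), y_tr)
        preds = clf.predict(vec.transform(list(texts[test_idx])))
        scores.append(f1_score(labels[test_idx], preds, pos_label=1))
    return scores
```

Oversampling is applied inside each fold, after splitting, so duplicated SATD comments never leak from a training fold into its test fold.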


