Exploring Data Augmentation for Code Generation Tasks

02/05/2023
by Pinzhen Chen, et al.

Advances in natural language processing, such as transfer learning from pre-trained language models, have impacted how models are trained for programming language tasks too. Previous research primarily explored code pre-training and expanded it through multi-modality and multi-tasking, yet the data for downstream tasks remain modest in size. Focusing on data utilization for downstream tasks, we propose and adapt augmentation methods that yield consistent improvements in code translation and summarization by up to 6.9 and 7.5, respectively, and show benefits in output code style and numeric consistency. We also discuss test data imperfections.
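The abstract does not spell out the augmentation methods themselves. Purely as an illustration of the kind of semantics-preserving augmentation often applied to code corpora (not necessarily the methods used in this paper), the sketch below renames local identifiers in a Python snippet so the program's behaviour is unchanged while its surface form varies; the function name `rename_identifiers` and the renaming scheme are hypothetical.

```python
import ast


def rename_identifiers(source: str) -> str:
    """Return a copy of `source` with locally assigned names consistently renamed.

    A simplified, semantics-preserving augmentation heuristic: behaviour is
    unchanged, but the surface form differs, which can reduce a model's
    reliance on specific identifier names.
    """
    tree = ast.parse(source)

    # Heuristic: collect names that are assigned to (Name nodes in Store context).
    targets = sorted({
        node.id for node in ast.walk(tree)
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store)
    })
    mapping = {name: f"v{i}" for i, name in enumerate(targets)}

    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node: ast.Name) -> ast.Name:
            if node.id in mapping:
                node.id = mapping[node.id]
            return node

    new_tree = ast.fix_missing_locations(Renamer().visit(tree))
    return ast.unparse(new_tree)  # requires Python 3.9+


if __name__ == "__main__":
    original = "def add(a, b):\n    total = a + b\n    return total\n"
    print(rename_identifiers(original))
```

In practice such augmented copies would be added alongside the original training pairs; more careful implementations also rename function arguments and avoid collisions with existing names.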
