Code Representation Pre-training with Complements from Program Executions

09/04/2023
by   Jiabo Huang, et al.

Large language models (LLMs) for natural language processing have been grafted onto programming language modeling to advance code intelligence. Although code can be represented as text, it is syntactically more rigorous, since it must be properly compiled or interpreted to perform a desired set of behaviors given any inputs. Existing works therefore benefit from syntactic representations, such as abstract syntax trees and control-flow graphs, to learn from code less ambiguously. However, programs with the same purpose can be implemented in various ways that yield different syntactic representations, while programs with similar implementations can have distinct behaviors. Though trivially demonstrated during execution, such functional semantics are challenging to learn directly from code, especially in an unsupervised manner. Hence, in this paper, we propose FuzzPretrain to explore the dynamic information of programs revealed by their test cases and to embed it into the feature representations of code as complements. The test cases are obtained with the assistance of a customized fuzzer and are required only during pre-training. FuzzPretrain yielded improvements of more than 6 over its counterparts trained with only source code or AST, respectively. Our extensive experimental results show the benefits of learning discriminative code representations with program executions.
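The contrast the abstract draws, that syntactically different programs can be behaviorally identical while syntactically similar ones can diverge, is easy to see with a fuzzing-style check. A minimal sketch (all function names here are hypothetical illustrations, not part of FuzzPretrain): random test inputs expose the behavioral agreement or disagreement that syntax comparison alone cannot reveal.

```python
import random

# Two syntactically different implementations with identical behavior.
def sum_iterative(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    return n * (n + 1) // 2

# A syntactically similar variant with distinct behavior (off-by-one).
def sum_off_by_one(n):
    return n * (n - 1) // 2

# Fuzzing-style differential check: a few fixed inputs plus random ones.
inputs = list(range(5)) + [random.randint(0, 1000) for _ in range(100)]

# The dissimilar-looking pair agrees on every input...
assert all(sum_iterative(x) == sum_closed_form(x) for x in inputs)
# ...while the similar-looking pair is separated by at least one input.
assert any(sum_iterative(x) != sum_off_by_one(x) for x in inputs)
```

This is the kind of dynamic signal, visible only by running code on concrete inputs, that a purely syntactic representation such as an AST would miss.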

10/26/2022

Benchmarking Language Models for Code Syntax Understanding

Pre-trained language models have demonstrated impressive performance in ...
05/10/2021

How could Neural Networks understand Programs?

Semantic understanding of programs is a fundamental problem for programm...
05/23/2023

Understanding Programs by Exploiting (Fuzzing) Test Cases

Semantic understanding of programs has attracted great attention in the ...
06/19/2018

Neural Code Comprehension: A Learnable Representation of Code Semantics

With the recent success of embeddings in natural language processing, re...
08/07/2023

Symmetry-Preserving Program Representations for Learning Code Semantics

Large Language Models (LLMs) have shown promise in automated program rea...
06/14/2022

Exploring Representation of Horn Clauses using GNNs (technique report)

Learning program semantics from raw source code is challenging due to th...
06/11/2023

Augmenting Greybox Fuzzing with Generative AI

Real-world programs expecting structured inputs often have a format-parsi...
