CodeMark: Imperceptible Watermarking for Code Datasets against Neural Code Completion Models

08/28/2023
by Zhensu Sun, et al.

Code datasets are of immense value for training neural-network-based code completion models, and companies and organizations have made substantial investments to build and process them. Unfortunately, these datasets, whether proprietary or public, face a high risk of unauthorized exploitation stemming from data leakage, license violations, and similar causes. Even worse, the "black-box" nature of neural models sets a high barrier for external parties to audit their training datasets, which further enables such unauthorized usage. Watermarking methods have been proposed to deter inappropriate use of image and natural language datasets. However, due to domain specificity, they are not directly applicable to code datasets, leaving copyright protection in this emerging and important field exposed to threats. To fill this gap, we propose a method, named CodeMark, that embeds user-defined imperceptible watermarks into code datasets to trace their usage in training neural code completion models. CodeMark is based on adaptive semantic-preserving transformations, which preserve the exact functionality of the code and keep the changes covert against rule-breakers. We implement CodeMark in a toolkit and conduct an extensive evaluation on code completion models. CodeMark is validated to fulfill all desired properties of a practical watermark: harmlessness to model accuracy, verifiability, robustness, and imperceptibility.
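To make the idea of a semantic-preserving transformation concrete, here is a minimal illustrative sketch in Python. It is not CodeMark's actual transformation rules, which the paper defines; it only shows how a functionality-preserving rewrite (here, a hypothetical `x = x + 1` to `x += 1` rule for integer counters) can bias a trained model toward a watermarked code form without changing program behavior.

```python
import re

def watermark_transform(source: str) -> str:
    """Rewrite `x = x + 1` into the equivalent `x += 1`.

    For plain integer counters both forms behave identically, so the
    edit is invisible to users of the dataset, yet a model trained on
    the transformed corpus learns a measurable preference for the
    watermarked form. (For mutable objects the two forms can differ,
    so a real rule set must restrict where the rewrite applies.)
    """
    pattern = re.compile(r"\b(\w+)\s*=\s*\1\s*\+\s*1\b")
    return pattern.sub(r"\1 += 1", source)

print(watermark_transform("count = count + 1"))  # count += 1
```

Verification would then query the suspect model on trigger contexts and test whether it completes with the watermarked form significantly more often than an unwatermarked baseline.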

