Do Users Write More Insecure Code with AI Assistants?

11/07/2022
by Neil Perry, et al.

We conduct the first large-scale user study examining how users interact with an AI code assistant to solve a variety of security-related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g., rephrasing, adjusting temperature) produced code with fewer security vulnerabilities. Finally, to better inform the design of future AI-based code assistants, we provide an in-depth analysis of participants' language and interaction behavior, and we release our user interface as an instrument for conducting similar studies in the future.
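The "temperature" setting mentioned above controls how randomly a code model samples its next token: lowering it makes completions more deterministic, raising it makes them more diverse. A minimal sketch of temperature-scaled sampling probabilities (the logits below are hypothetical illustration values, not output from codex-davinci-002):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature before the softmax:
    # temperature < 1 sharpens the distribution (near-greedy),
    # temperature > 1 flattens it (more varied completions).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits for three candidate tokens.
logits = [2.0, 1.0, 0.1]
low_t = softmax_with_temperature(logits, 0.2)   # sharply peaked
high_t = softmax_with_temperature(logits, 2.0)  # much flatter
```

With a low temperature the top token receives almost all of the probability mass, which is why participants who adjusted this parameter saw noticeably different assistant behavior.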


Related research:

research, 08/12/2023
Copilot Security: A User Study
Code generation tools driven by artificial intelligence have recently be...

research, 05/31/2023
AI for Low-Code for AI
Low-code programming allows citizen developers to create programs with m...

research, 06/30/2022
Grounded Copilot: How Programmers Interact with Code-Generating Models
Powered by recent advances in code-generating models, AI assistants like...

research, 05/11/2023
Taking Advice from ChatGPT
A growing literature studies how humans incorporate advice from algorith...

research, 10/25/2022
Reading Between the Lines: Modeling User Behavior and Costs in AI-Assisted Programming
AI code-recommendation systems (CodeRec), such as Copilot, can assist pr...

research, 10/18/2020
The Convergence of AI code and Cortical Functioning – a Commentary
Neural nets, one of the oldest architectures for AI programming, are loo...

research, 03/06/2023
Choice Over Control: How Users Write with Large Language Models using Diegetic and Non-Diegetic Prompting
We propose a conceptual perspective on prompts for Large Language Models...
