Towards using Few-Shot Prompt Learning for Automating Model Completion

12/07/2022
by Meriem Ben Chaaben, et al.

We propose a simple yet novel approach to improving completion in domain modeling activities. Our approach exploits the power of large language models through few-shot prompt learning, without the need to train or fine-tune those models on large datasets, which are scarce in this field. We implemented our approach and tested it on the completion of static and dynamic domain diagrams. Our initial evaluation shows that such an approach is effective and can be integrated in different ways into modeling activities.
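To make the idea concrete, the sketch below shows one plausible shape of few-shot prompt learning for model completion: a handful of worked examples are prepended to the partial domain model, and a language model is asked to continue the pattern. The textual model notation, the example pairs, and the `llm_complete` callable are hypothetical stand-ins, not the authors' actual implementation.

```python
# Minimal sketch of few-shot prompt learning for domain model completion.
# The notation, example pairs, and llm_complete() are hypothetical
# illustrations; no training or fine-tuning is involved, only prompting.

from typing import Callable

# Two worked examples (the "shots"): a partial class diagram and a
# plausible completion, written in a simple textual notation.
FEW_SHOT_EXAMPLES = """\
Partial model: Library has Books. Book has title.
Completion: Book has author, isbn. Library has name. Member borrows Book.

Partial model: Store sells Products. Product has price.
Completion: Product has name, stock. Store has location. Customer buys Product.
"""

def build_prompt(partial_model: str) -> str:
    """Prepend the few-shot examples to the model under construction."""
    return f"{FEW_SHOT_EXAMPLES}\nPartial model: {partial_model}\nCompletion:"

def suggest_completion(partial_model: str,
                       llm_complete: Callable[[str], str]) -> str:
    """Ask a large language model to extend a partial domain model.

    llm_complete is any text-completion endpoint (e.g., a GPT-style API).
    """
    return llm_complete(build_prompt(partial_model)).strip()

if __name__ == "__main__":
    # Stub LLM so the sketch runs offline; swap in a real API call.
    fake_llm = lambda prompt: " School has Students. Student has name, id."
    print(suggest_completion("School has Students.", fake_llm))
```

The returned text can then be parsed back into model elements and offered to the modeler as completion suggestions, which is one of the integration points the abstract alludes to.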

Related research

10/19/2022
TabLLM: Few-shot Classification of Tabular Data with Large Language Models
We study the application of large language models to zero-shot and few-s...

08/18/2023
Domain Adaptive Code Completion via Language Models and Decoupled Domain Databases
Large Language Models (LLMs) have demonstrated remarkable performance in...

07/07/2022
Meta-Learning the Difference: Preparing Large Language Models for Efficient Adaptation
Large pretrained language models (PLMs) are often domain- or task-adapte...

01/01/2022
Cross-Domain Deep Code Search with Few-Shot Meta Learning
Recently, pre-trained programming language models such as CodeBERT have ...

08/12/2023
Three Ways of Using Large Language Models to Evaluate Chat
This paper describes the systems submitted by team6 for ChatEval, the DS...

05/21/2022
All You Need Is Logs: Improving Code Completion by Learning from Anonymous IDE Usage Logs
Integrated Development Environments (IDE) are designed to make users mor...

05/08/2022
Context-Aware Abbreviation Expansion Using Large Language Models
Motivated by the need for accelerating text entry in augmentative and al...