Büyük dil modellerinin Türkçe verisetleri ile eğitilmesi ve ince ayarlanması (Training and Fine-Tuning Large Language Models with Turkish Datasets)

06/06/2023
by A. Taha Arslan, et al.

Large language models have advanced enormously, attracted vast attention, and are undergoing a phase of intense research. Some of the developed models and their training datasets have been made openly accessible, so they can be further fine-tuned to obtain specialized models for specific tasks. When it comes to the Turkish language, however, open-access models do not provide satisfactory coverage, and the same can be observed in the published datasets. In this work, we propose ways to mitigate this issue: creating large Turkish datasets, training LLMs on them, and fine-tuning pre-trained models with Turkish inputs. We report our findings on Turkish-based training, along with the problems encountered along the way. We conclude with the outcomes of these experiments and propose ideas for further work.
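The workflow the abstract proposes (assembling a Turkish text corpus and fine-tuning an open-access pre-trained model on it) follows a now-standard causal-language-modeling recipe. Below is a minimal sketch of that recipe using the Hugging Face Transformers stack; the model name ("gpt2"), the corpus file ("turkish_corpus.txt"), and all hyperparameters are illustrative placeholders, not details taken from the paper.

```python
# Minimal sketch: fine-tune an open-access causal LM on a Turkish corpus.
# Assumes the `transformers` and `datasets` libraries are installed.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # placeholder; any open-access causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder corpus: one Turkish sentence or document per line.
dataset = load_dataset("text", data_files={"train": "turkish_corpus.txt"})

def tokenize(batch):
    # Truncate long lines so every example fits the model's context window.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="turkish-finetune",
        per_device_train_batch_size=4,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    # mlm=False selects the plain causal-LM objective (labels = shifted inputs),
    # not BERT-style masked language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

One practical caveat the paper's problem statement implies: a tokenizer trained mostly on English text segments Turkish (an agglutinative language) into many short subword pieces, so corpus coverage and tokenizer choice matter as much as the fine-tuning loop itself.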
