DIN-SQL: Decomposed In-Context Learning of Text-to-SQL with Self-Correction

04/21/2023
by   Mohammadreza Pourreza, et al.

We study the problem of decomposing a complex text-to-SQL task into smaller sub-tasks and how such a decomposition can significantly improve the performance of Large Language Models (LLMs) in the reasoning process. There is currently a significant gap between the performance of fine-tuned models and prompting approaches using LLMs on challenging text-to-SQL datasets such as Spider. We show that SQL queries, despite their declarative structure, can be broken down into sub-problems, and the solutions of those sub-problems can be fed into LLMs to significantly improve their performance. Our experiments with three LLMs show that this approach consistently improves their performance by roughly 10%, beating large fine-tuned models on the holdout Spider dataset.
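As a rough illustration of the decomposition idea described above, the sketch below chains sub-task prompts (schema linking, SQL generation, and a self-correction pass, the last suggested by the paper's title) so that each stage's answer is fed into the next prompt. The stage names, prompt wording, and the `call_llm` stub are illustrative assumptions, not the paper's exact interface; the stub returns canned answers so the sketch runs without an API key.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (assumption; swap in an actual client)."""
    # Canned responses keyed on words appearing in each stage's prompt,
    # so the pipeline is runnable offline for demonstration purposes.
    canned = {
        "link": "tables: singer; columns: singer.name, singer.age",
        "generate": "SELECT name FROM singer WHERE age > 30",
        "correct": "SELECT name FROM singer WHERE age > 30;",
    }
    for key, answer in canned.items():
        if key in prompt.lower():
            return answer
    return ""


def text_to_sql(question: str, schema: str) -> str:
    # Sub-task 1: link the natural-language question to relevant schema elements.
    links = call_llm(
        f"Link this question to the schema.\nSchema: {schema}\nQuestion: {question}"
    )
    # Sub-task 2: generate a draft SQL query conditioned on the linked schema.
    draft = call_llm(
        f"Generate SQL for the question.\nQuestion: {question}\nLinked schema: {links}"
    )
    # Sub-task 3: a self-correction pass over the draft query.
    return call_llm(f"Correct this SQL if needed: {draft}")


print(text_to_sql("Which singers are older than 30?", "singer(name, age)"))
```

The point of the decomposition is that each prompt is small and focused, so the model reasons over one sub-problem at a time instead of the full task at once.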

