Modern systems for multi-hop question answering (QA) typically break que...
In-context learning has shown great success in i.i.d. semantic parsing sp...
Standard practice in pretraining multimodal models, such as vision-langu...
While recent work has convincingly shown that sequence-to-sequence mode...
While interest in models that generalize at test time to new composition...
Most available semantic parsing datasets, comprising pairs of natural...
Understanding the relationship between figures and text is key to scient...
Answering questions that involve multi-step reasoning requires decomposi...
Neural module networks (NMNs) are a popular approach for modeling compos...
Standard test sets for supervised learning evaluate in-distribution gene...
State-of-the-art semantic parsers rely on auto-regressive decoding, emit...
The sequence-to-sequence paradigm employed by neural text-to-SQL models ...
Research on parsing language to SQL has largely ignored the structure of...
Training agents to communicate with one another given task-based supervi...
Generative Adversarial Networks (GANs) have shown great promise recently...