CS563-QA: A Collection for Evaluating Question Answering Systems

07/02/2019
by Katerina Papantoniou, et al.

Question Answering (QA) is a challenging topic, since it requires tackling the many difficulties of natural language understanding. Evaluation is important not only for identifying the strengths and weaknesses of existing QA techniques, but also for facilitating the inception of new methods. In this paper we present a collection we have created for evaluating QA methods over free text. Although small, the collection contains cases of increasing difficulty; it therefore has educational value and can be used for rapid evaluation of QA systems.


Related research

QANUS: An Open-source Question-Answering Platform (01/01/2015)
In this paper, we motivate the need for a publicly available, generic so...

A Quantitative Evaluation of Natural Language Question Interpretation for Question Answering Systems (09/20/2018)
Systematic benchmark evaluation plays an important role in the process o...

What do we expect from Multiple-choice QA Systems? (11/20/2020)
The recent success of machine learning systems on various QA datasets co...

Question Answering as Global Reasoning over Semantic Abstractions (06/09/2019)
We propose a novel method for exploiting the semantic structure of text ...

CompMix: A Benchmark for Heterogeneous Question Answering (06/21/2023)
Fact-centric question answering (QA) often requires access to multiple, ...

ContraQA: Question Answering under Contradicting Contexts (10/15/2021)
With a rise in false, inaccurate, and misleading information in propagan...

QADiver: Interactive Framework for Diagnosing QA Models (12/01/2018)
Question answering (QA) extracting answers from text to the given questi...
