InstructUIE: Multi-task Instruction Tuning for Unified Information Extraction

by Xiao Wang et al.

Large language models can perform a wide range of tasks by following natural-language instructions. However, recent studies show that existing large models still struggle with information extraction. For example, gpt-3.5-turbo achieves an F1 score of 18.22 on the OntoNotes dataset, far below the supervised state of the art. In this paper, we propose InstructUIE, a unified information extraction framework based on instruction tuning, which uniformly models diverse information extraction tasks and captures inter-task dependencies. To validate the proposed method, we introduce IE INSTRUCTIONS, a benchmark of 32 diverse information extraction datasets in a unified text-to-text format with expert-written instructions. Experimental results demonstrate that our method achieves performance comparable to BERT in supervised settings and significantly outperforms both the state of the art and gpt-3.5-turbo in zero-shot settings.
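The abstract describes casting heterogeneous IE tasks into a single text-to-text format driven by written instructions. A minimal sketch of what such a prompt might look like for named entity recognition is shown below; the template, wording, and output convention here are illustrative assumptions, not the paper's actual IE INSTRUCTIONS schema.

```python
# Illustrative sketch: casting NER as a text-to-text instruction task,
# in the spirit of instruction-tuned unified IE frameworks.
# The template and answer format are hypothetical, not the paper's exact schema.

def build_ner_prompt(sentence: str, entity_types: list[str]) -> str:
    """Wrap a sentence in an instruction template for entity extraction."""
    instruction = (
        "Please extract all named entities from the text below. "
        f"Allowed entity types: {', '.join(entity_types)}. "
        "Answer in the form 'entity: type', one per line."
    )
    return f"{instruction}\nText: {sentence}\nAnswer:"

prompt = build_ner_prompt(
    "Barack Obama was born in Hawaii.",
    ["person", "location", "organization"],
)
print(prompt)
```

Because every task (NER, relation extraction, event extraction) is serialized into the same prompt-and-answer text format, one model can be fine-tuned on all of them jointly, which is what allows instruction tuning to capture inter-task dependencies.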


MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning

Instruction tuning, a new learning paradigm that fine-tunes pre-trained ...

Task-aware Retrieval with Instructions

We study the problem of retrieval with instructions, where users of a re...

UnifiedABSA: A Unified ABSA Framework Based on Multi-task Instruction Tuning

Aspect-Based Sentiment Analysis (ABSA) aims to provide fine-grained aspe...

Universal Information Extraction as Unified Semantic Matching

The challenge of information extraction (IE) lies in the diversity of la...

STORYWARS: A Dataset and Instruction Tuning Baselines for Collaborative Story Understanding and Generation

Collaborative stories, which are texts created through the collaborative...

PIVOINE: Instruction Tuning for Open-world Information Extraction

We consider the problem of Open-world Information Extraction (Open-world...

Schema-Driven Information Extraction from Heterogeneous Tables

In this paper, we explore the question of whether language models (LLMs)...
