NLP EMNLP

Learning Cross-Task Dependencies for Joint Extraction of Entities, Events, Event Arguments, and Relations

October 17, 2022
@inproceedings{nguyen-etal-2022-learning,
    title = "Learning Cross-Task Dependencies for Joint Extraction of Entities, Events, Event Arguments, and Relations",
    author = "Nguyen, Minh Van  and
      Min, Bonan  and
      Dernoncourt, Franck  and
      Nguyen, Thien",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.634",
    pages = "9349--9360",
    abstract = "Extracting entities, events, event arguments, and relations (i.e., task instances) from text represents four main challenging tasks in information extraction (IE), which have been solved jointly (JointIE) to boost the overall performance for IE. As such, previous work often leverages two types of dependencies between the tasks, i.e., cross-instance and cross-type dependencies representing relatedness between task instances and correlations between information types of the tasks. However, the cross-task dependencies in prior work are not optimal as they are only designed manually according to some task heuristics. To address this issue, we propose a novel model for JointIE that aims to learn cross-task dependencies from data. In particular, we treat each task instance as a node in a dependency graph where edges between the instances are inferred through information from different layers of a pretrained language model (e.g., BERT). Furthermore, we utilize the Chow-Liu algorithm to learn a dependency tree between information types for JointIE by seeking to approximate the joint distribution of the types from data. Finally, the Chow-Liu dependency tree is used to generate cross-type patterns, serving as anchor knowledge to guide the learning of representations and dependencies between instances for JointIE. Experimental results show that our proposed model significantly outperforms strong JointIE baselines over four datasets with different languages.",
}
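The abstract describes using the Chow-Liu algorithm to learn a dependency tree over information types by approximating their joint distribution from data. As a hedged sketch of that step (not the paper's implementation; the toy samples, variable names, and helper functions below are invented for illustration), Chow-Liu reduces to building a maximum-spanning tree over pairwise empirical mutual information:

```python
# Illustrative Chow-Liu sketch: estimate pairwise mutual information from
# discrete samples, then take a maximum-spanning tree (Kruskal) over it.
# Toy data and names are assumptions, not from the paper.
import math
from collections import Counter
from itertools import combinations


def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in nats from paired samples."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts folded in.
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi


def chow_liu_tree(samples, names):
    """Return maximum-spanning-tree edges over pairwise MI of the columns."""
    cols = list(zip(*samples))
    scored = sorted(
        ((mutual_information(cols[i], cols[j]), i, j)
         for i, j in combinations(range(len(names)), 2)),
        reverse=True,
    )
    parent = list(range(len(names)))  # union-find for Kruskal

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    edges = []
    for mi, i, j in scored:
        ri, rj = find(i), find(j)
        if ri != rj:  # adding this edge keeps the graph acyclic
            parent[ri] = rj
            edges.append((names[i], names[j], mi))
    return edges


# Hypothetical "information type" observations per sentence.
samples = [(0, 0, 1), (1, 1, 0), (0, 0, 0), (1, 1, 1)]
names = ["entity", "event", "relation"]
tree = chow_liu_tree(samples, names)
```

On this toy data the first two columns are perfectly correlated, so the strongest edge joins them; the resulting tree then serves (per the abstract) as the source of cross-type patterns guiding instance-level learning.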


Minh Van Nguyen, Bonan Min, Franck Dernoncourt and Thien Huu Nguyen

EMNLP 2022

