Lookup NU author(s): Dr Noura Al Moubayed, Dr Steven Bradley, Dr Stephen McGough
This is the authors' accepted manuscript of a conference proceedings (inc. abstract) that has been published in its final definitive form by IEEE, 2018.
For re-use rights please refer to the publisher's terms and conditions.
Natural Language Inference (NLI) is a fundamental step towards natural language understanding. The task aims to detect whether a premise entails or contradicts a given hypothesis. NLI contributes to a wide range of natural language understanding applications such as question answering, text summarization and information extraction. Recently, the public availability of big datasets such as Stanford Natural Language Inference (SNLI) and SciTail has made it feasible to train complex neural NLI models. In particular, Bidirectional Long Short-Term Memory networks (BiLSTMs) with attention mechanisms have shown promising performance for NLI. In this paper, we propose a Combined Attention Model (CAM) for NLI. CAM combines two attention mechanisms: intra-attention and inter-attention. The model first captures the semantics of the individual input premise and hypothesis with intra-attention and then aligns the premise and hypothesis with inter-sentence attention. We evaluate CAM on two benchmark datasets: Stanford Natural Language Inference (SNLI) and SciTail, achieving 86.14% accuracy on SNLI and 77.23% on SciTail. Further, to investigate the effectiveness of each attention mechanism individually and in combination, we present an analysis showing that the intra- and inter-attention mechanisms achieve higher accuracy when combined than when used independently.
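The abstract names two components: intra-attention, which pools each sentence's BiLSTM states into a single vector, and inter-attention, which soft-aligns premise and hypothesis tokens against each other. The PyTorch sketch below illustrates one plausible reading of that pipeline; the layer sizes, the attention scoring functions, and the way the two views are combined are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of intra- and inter-attention over BiLSTM encodings,
# assuming dot-product alignment and a learned token-relevance score.
# Dimensions and the final combination step are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CombinedAttentionSketch(nn.Module):
    def __init__(self, embed_dim=300, hidden_dim=128):
        super().__init__()
        # Shared BiLSTM encoder for premise and hypothesis.
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Intra-attention: a learned relevance score per time step.
        self.intra_score = nn.Linear(2 * hidden_dim, 1)

    def encode(self, x):
        h, _ = self.encoder(x)                      # (batch, seq, 2*hidden)
        return h

    def intra_attend(self, h):
        # Weight each token by its relevance and pool into one vector.
        a = F.softmax(self.intra_score(h), dim=1)   # (batch, seq, 1)
        return (a * h).sum(dim=1)                   # (batch, 2*hidden)

    def inter_attend(self, hp, hh):
        # Soft-align premise and hypothesis tokens via dot-product scores.
        e = torch.bmm(hp, hh.transpose(1, 2))       # (batch, p_len, h_len)
        p_aligned = torch.bmm(F.softmax(e, dim=2), hh)  # hypothesis view of premise
        h_aligned = torch.bmm(F.softmax(e, dim=1).transpose(1, 2), hp)
        return p_aligned, h_aligned

    def forward(self, premise_emb, hypothesis_emb):
        hp, hh = self.encode(premise_emb), self.encode(hypothesis_emb)
        # Intra-attention summaries of each sentence on its own.
        p_vec, h_vec = self.intra_attend(hp), self.intra_attend(hh)
        # Inter-attention alignment between the two sentences.
        p_al, h_al = self.inter_attend(hp, hh)
        # One plausible way to combine both views before classification.
        return torch.cat([p_vec, h_vec,
                          p_al.mean(dim=1), h_al.mean(dim=1)], dim=-1)
```

A downstream classifier (e.g. an MLP over the returned vector) would then predict entailment, contradiction, or neutral from the combined representation.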
Author(s): Gajbhiye A, Jaf S, Al Moubayed N, Bradley S, McGough AS
Publication type: Conference Proceedings (inc. Abstract)
Publication status: Published
Conference Name: IEEE International Conference on Big Data
Year of Conference: 2018
Online publication date: 10 December 2018
Acceptance date: 3 November 2018