Objective of this Experiment

The primary objective of this experiment is to use Deep Learning algorithms to solve the Closed-Domain Question Answering problem for business documents and insights reports. We analysed and compared several algorithms in order to build a Question Answering system. Attention-based transformers allow us to expand and apply the same model to a variety of question answering tasks.
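To make the approach concrete, the sketch below shows how an attention-based transformer fine-tuned on SQuAD can extract an answer span from a passage, using the Hugging Face transformers library. The checkpoint and the example passage are illustrative choices on our part, not necessarily the exact model or data used in the experiment.

```python
# Minimal extractive QA sketch: a transformer fine-tuned on SQuAD
# predicts an answer span within a given context passage.
# The checkpoint and passage below are illustrative.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",
)

context = (
    "The sustainability report highlights a 12% reduction in packaging "
    "waste across the retail portfolio during 2021."
)
result = qa(question="How much was packaging waste reduced?", context=context)
print(result["answer"], round(result["score"], 3))
```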

Further, we will explore multiple approaches to identifying sections and sub-sections within a PDF document, using both HTML conversion and computer vision.
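As one possible sketch of this layout-analysis step, the heuristic below flags candidate section headings by font size using PyMuPDF. The library choice and the size threshold are assumptions for illustration, not the exact pipeline used in the experiment.

```python
# Heuristic section detection with PyMuPDF (fitz): treat text spans
# whose font size exceeds the typical body-text size as candidate
# section headings. The threshold is illustrative and would be tuned
# per report family.
import fitz  # PyMuPDF

def candidate_headings(pdf_path, min_size=13.0):
    doc = fitz.open(pdf_path)
    headings = []
    for page in doc:
        for block in page.get_text("dict")["blocks"]:
            # Image blocks carry no "lines" key, so default to [].
            for line in block.get("lines", []):
                for span in line["spans"]:
                    text = span["text"].strip()
                    if text and span["size"] >= min_size:
                        headings.append((page.number + 1, text))
    return headings

for page_no, text in candidate_headings("report.pdf"):
    print(page_no, text)
```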

Business Use Cases and Applications

There are multiple direct and indirect applications of this experiment, including:

  • Semantic Search System: Semantic search refers to the ability of search engines to consider the intent and contextual meaning of search phrases when serving content to users on the web. Earlier search engines could only match the exact phrasing of a search term against a query.
  • Information Extraction: Intelligent automated extraction of relevant information and insights from one or multiple documents using the latest advances in deep learning.
  • FAQ Style Question Answering: Leveraging existing FAQ documents and semantic similarity search to answer new incoming questions (a brief sketch of this matching step follows this list).
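Below is a minimal sketch of the FAQ matching step referenced above: embed the stored FAQ questions, embed the incoming question, and return the answer attached to the nearest neighbour. The sentence-transformers model and the toy FAQ entries are illustrative assumptions.

```python
# FAQ-style answering via semantic similarity: embed stored FAQ
# questions and retrieve the answer of the closest match.
# The model name and FAQ entries are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

faq = {
    "How do I reset my password?": "Use the 'Forgot password' link.",
    "Where can I download the report?": "Reports are under the Insights tab.",
}
faq_questions = list(faq)
faq_embeddings = model.encode(faq_questions, convert_to_tensor=True)

query = "I can't remember my password"
query_embedding = model.encode(query, convert_to_tensor=True)
best = util.cos_sim(query_embedding, faq_embeddings).argmax().item()
print(faq[faq_questions[best]])
```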

Dataset

SQuAD – The Stanford Question Answering Dataset (SQuAD) is a machine comprehension dataset based on Wikipedia. It contains 87k examples for training and 10k for development, plus a large hidden test set which can only be accessed by the SQuAD creators. Each example is composed of a paragraph extracted from a Wikipedia article and an associated human-generated question. The answer is always a span from this paragraph, and a model is given credit if its predicted answer matches the gold span.
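For reference, the SQuAD splits can be inspected with the Hugging Face datasets library; this is a convenience sketch for exploring the (context, question, answer-span) structure described above, not part of the original experiment.

```python
# Load SQuAD and inspect its (context, question, answer-span) layout.
from datasets import load_dataset

squad = load_dataset("squad")
print(squad["train"].num_rows, squad["validation"].num_rows)  # ~87.6k / ~10.6k

example = squad["train"][0]
print(example["question"])
print(example["answers"]["text"][0])       # gold answer span
print(example["answers"]["answer_start"])  # character offset(s) in the context
```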

WIKITQ – This dataset consists of complex questions about Wikipedia tables. Given a table, crowd workers were asked to compose a series of complex questions involving comparisons, superlatives, aggregation, or arithmetic operations. The questions were then verified by other crowd workers.

SQA – This dataset was constructed by asking crowd workers to decompose a subset of highly compositional questions from WIKITQ, such that each resulting decomposed question can be answered by one or more table cells. The final set consists of 6,066 question sequences.
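WIKITQ and SQA are commonly used to fine-tune table question answering models such as TAPAS. The sketch below runs one publicly available WIKITQ-fine-tuned checkpoint through the transformers table-question-answering pipeline; the checkpoint and toy table are illustrative, and we are not asserting this is the exact model used in the experiment.

```python
# Answering a question over a table with a model fine-tuned on WIKITQ,
# via the transformers table-question-answering pipeline.
# Note: TAPAS checkpoints additionally require the torch-scatter package.
import pandas as pd
from transformers import pipeline

table_qa = pipeline(
    "table-question-answering",
    model="google/tapas-base-finetuned-wtq",
)

# The pipeline expects every cell to be a string.
table = pd.DataFrame(
    {"Report": ["Trends 2021", "Trends 2022"], "Pages": ["48", "52"]}
).astype(str)

print(table_qa(table=table, query="Which report has the most pages?"))
```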

Domain Specific PDFs – For this experiment, we used 8 Sustainability Trends reports from Amplifi PRO. As a result, all rules and customisation for answer extraction were built around these reports.

Experiment Outcomes

We were able to answer sample questions on the selected reports with an overall accuracy of ~85%.

Do you have an AI use case you want to explore?

If you’ve got a specific AI use case that you’d like some help exploring, if you’re interested in collaborating with us as partners, or if you’re just interested in finding viable and effective ways to apply AI in your organisation, speak to us today.
