Unsupervised multiple-choice question generation for out-of-domain Q&A fine-tuning
Guillaume Le Berre (guillaume (point) le_berre <at> depinfonancy (point) net)
Wednesday 22 September 2021 at 11:30 AM
Zoom meeting (link below)
Pre-trained models have shown very good performance on a number of question-answering benchmarks, especially when fine-tuned on several question-answering datasets at once. In this work, we generate a fine-tuning dataset using a rule-based algorithm that produces questions and answers from unannotated sentences. We show that the state-of-the-art model UnifiedQA can greatly benefit from such a system on a multiple-choice benchmark covering physics, biology, and chemistry that it has never been trained on. We further show that performance may be improved by selecting the most challenging distractors (wrong answers) with a dedicated ranker based on a pretrained RoBERTa model.
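The abstract does not detail how the RoBERTa-based distractor ranker works, so the following is only a minimal illustrative sketch of one plausible setup: it ranks candidate wrong answers by how semantically close they are to the correct answer, on the assumption that closer candidates make harder distractors. The model checkpoint (roberta-base), the mean-pooling step, and the cosine-similarity criterion are all assumptions for illustration, not the authors' actual method.

    # Illustrative sketch only: rank candidate distractors by semantic
    # similarity to the correct answer, using mean-pooled RoBERTa embeddings.
    # The talk's actual ranker is presumably trained; everything here
    # (checkpoint, pooling, similarity criterion) is an assumption.
    import torch
    from transformers import RobertaTokenizer, RobertaModel

    tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
    model = RobertaModel.from_pretrained("roberta-base")
    model.eval()

    def embed(text: str) -> torch.Tensor:
        """Mean-pool the last hidden states into a single sentence vector."""
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
        return hidden.mean(dim=1).squeeze(0)            # (768,)

    def rank_distractors(answer: str, candidates: list[str]) -> list[str]:
        """Order candidates so the most answer-like (hardest) come first."""
        a = embed(answer)
        scores = [torch.cosine_similarity(a, embed(c), dim=0).item()
                  for c in candidates]
        return [c for _, c in sorted(zip(scores, candidates), reverse=True)]

    # Hypothetical usage with made-up candidates for a biology question:
    print(rank_distractors(
        "photosynthesis",
        ["cellular respiration", "chlorophyll synthesis", "the water cycle"],
    ))

Under this reading, the highest-scoring candidates would be kept as distractors precisely because they are hardest to rule out; a trained ranker could replace the cosine-similarity heuristic without changing the surrounding pipeline.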
The presentation will be given in French.