Conference papers

Benchmarking Transformers-based models on French Spoken Language Understanding tasks

Oralie Cattan (1, 2), Sahar Ghannay (1), Christophe Servan (1, 3), Sophie Rosset (1)
1. ILES - Information, Langue Ecrite et Signée, LISN - Laboratoire Interdisciplinaire des Sciences du Numérique, STL - Sciences et Technologies des Langues
Abstract: In the last five years, the rise of self-attentional Transformer-based architectures has led to state-of-the-art performance on many natural language tasks. Although these approaches are increasingly popular, they require large amounts of data and computational resources. There remains a substantial need for benchmarking methodologies for under-resourced languages in data-scarce application conditions. Most pre-trained language models have been studied extensively for English, and only a few have been evaluated on French. In this paper, we propose a unified benchmark focused on evaluating model quality and ecological impact on two well-known French spoken language understanding tasks. Specifically, we benchmark thirteen well-established Transformer-based models on the two spoken language understanding tasks available for French: MEDIA and ATIS-FR. Within this framework, we show that compact models can reach results comparable to bigger ones while their ecological impact is considerably lower. However, this finding is nuanced and depends on the compression method considered.
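The tasks benchmarked here (MEDIA, ATIS-FR) are slot-filling spoken language understanding tasks, which are commonly scored with precision/recall/F1 over predicted concept-value slots (the paper itself may report a different or additional metric, such as concept error rate). As a hedged illustration only, here is a minimal micro-averaged slot-F1 sketch; the slot names and data are hypothetical.

```python
def slot_f1(gold_slots, pred_slots):
    """Micro-averaged F1 over (slot_type, value) pairs.

    gold_slots / pred_slots: lists of sets, one set of slots per utterance.
    """
    tp = fp = fn = 0
    for gold, pred in zip(gold_slots, pred_slots):
        tp += len(gold & pred)   # slots predicted correctly
        fp += len(pred - gold)   # spurious predictions
        fn += len(gold - pred)   # missed gold slots
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy single-utterance example with hypothetical ATIS-style slots:
gold = [{("fromloc", "paris"), ("toloc", "lyon")}]
pred = [{("fromloc", "paris"), ("toloc", "nice")}]
print(slot_f1(gold, pred))  # one of two slots matches: P = R = F1 = 0.5
```

A metric like this makes model comparisons size-agnostic, which is what allows the paper to weigh task quality against the ecological cost of larger models.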
Contributor: Christophe Servan
Submitted on: Tuesday, July 19, 2022 - 10:52:57 AM
Last modification on: Friday, August 5, 2022 - 9:27:31 AM




  • HAL Id: hal-03715340, version 2


Oralie Cattan, Sahar Ghannay, Christophe Servan, Sophie Rosset. Benchmarking Transformers-based models on French Spoken Language Understanding tasks. INTERSPEECH 2022, Sep 2022, Incheon, South Korea. ⟨hal-03715340v2⟩


