Conference paper, Year: 2022

Attention Modulation for Zero-Shot Cross-Domain Dialogue State Tracking

Abstract

Dialogue state tracking (DST) is a core step for task-oriented dialogue systems, aiming to track the user's current goal during a dialogue. Recently, a special focus has been put on applying existing DST models to new domains, in other words performing zero-shot cross-domain transfer. While recent state-of-the-art models leverage large pre-trained language models, no work has been done on understanding and improving the results of earlier zero-shot models such as SUMBT. In this paper, we thus propose to improve SUMBT's zero-shot results on MultiWOZ by using attention modulation during inference. This method significantly improves SUMBT's zero-shot results on two domains and does not degrade the initial performance, with the significant advantage of requiring no additional training.
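The abstract does not spell out the modulation mechanism on this page. As a rough, non-authoritative illustration of the general idea of re-weighting attention at inference time without any retraining, the Python sketch below re-scales a softmax attention distribution toward chosen token positions; the function name, the multiplicative rule, and the boost factor are assumptions, not the paper's actual formulation.

import numpy as np

def modulate_attention(probs: np.ndarray,
                       boost_positions: list[int],
                       factor: float = 2.0) -> np.ndarray:
    """Re-weight an attention distribution at inference time (hypothetical rule).

    probs: softmax attention weights over the dialogue tokens, shape (seq_len,).
    boost_positions: token positions whose attention should be amplified,
        e.g. tokens related to a slot of an unseen domain.
    factor: multiplicative boost; the modulated weights are renormalized.
    """
    modulated = probs.copy()
    modulated[boost_positions] *= factor          # hypothetical modulation rule
    return modulated / modulated.sum()            # renormalize to a distribution

# Toy usage: amplify attention on positions 2 and 3 with no additional training.
attention = np.array([0.10, 0.20, 0.40, 0.25, 0.05])
print(modulate_attention(attention, boost_positions=[2, 3]))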

Dates and versions

hal-03839919, version 1 (04-11-2022)

Identifiers

  • HAL Id: hal-03839919, version 1

Cite

Mathilde Veron, Guillaume Bernard, Olivier Galibert, Sophie Rosset. Attention Modulation for Zero-Shot Cross-Domain Dialogue State Tracking. 3rd Workshop on Computational Approaches to Discourse (CODI) at COLING 2022, Oct 2022, Gyeongju, South Korea. ⟨hal-03839919⟩
