Our paper, “CoVA: LLM Data Augmentation to Enhance User-Defined Intents Recognition for Voice-Based Assistance in Collaborative XR Environments”, was presented at EuroXR 2025, the 22nd EuroXR International Conference, and appears in the Proceedings of the Application, Poster, and Demo Tracks.

Abstract

In this work we present CoVA, a virtual assistant designed to participate in business meetings, remote training scenarios, and industrial collaboration sessions. To remain versatile across use cases, a virtual assistant must excel not only at supporting predefined conversational patterns for delivering key information and triggering actions, but also at dynamically navigating shared documents and structured knowledge sources, so that it stays aligned with the user-provided materials of the assisted work session (previous meeting reports, project-related content, domain-specific graphs). Offering both capabilities requires efficient and reliable intent classification to distinguish in-scope from out-of-scope queries and route them to the appropriate processing pipeline, even when intent examples are scarce and heterogeneous. We therefore present an intent-agnostic method for recognizing user intents. Additionally, we describe how CoVA implements a Retrieval-Augmented Generation approach to surface information contained in shared documents.
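To illustrate the routing idea described in the abstract, the minimal sketch below classifies a query against a small set of example utterances and falls back to a retrieval (RAG) pipeline when no predefined intent matches. The intent names, example utterances, threshold, and the toy bag-of-words similarity are all illustrative assumptions, not the paper's actual method, which relies on LLM-based data augmentation.

```python
# Illustrative sketch only: route a user query either to a predefined-intent
# handler or to a document-retrieval (RAG) fallback. Intent names, examples,
# and the similarity measure are hypothetical stand-ins.
from collections import Counter
import math


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Hypothetical user-defined intents with few, heterogeneous examples.
INTENT_EXAMPLES = {
    "next_agenda_item": [
        "move to the next agenda item",
        "what is next on the agenda",
    ],
    "summarize_meeting": [
        "summarize the last meeting",
        "recap the previous session",
    ],
}


def route(query: str, threshold: float = 0.5) -> str:
    """Return the best-matching intent, or 'rag_pipeline' if out of scope."""
    q = Counter(query.lower().split())
    best_intent, best_score = "rag_pipeline", 0.0
    for intent, examples in INTENT_EXAMPLES.items():
        for example in examples:
            score = cosine(q, Counter(example.lower().split()))
            if score > best_score:
                best_intent, best_score = intent, score
    # Below the threshold the query is treated as out-of-scope for the
    # predefined intents and handed to the shared-document RAG pipeline.
    return best_intent if best_score >= threshold else "rag_pipeline"
```

In practice, as the abstract notes, the classifier must stay reliable despite limited and heterogeneous intent examples, which is where the paper's LLM data augmentation comes in; this toy lexical overlap would not.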

Authors

Alexis Lombard, Galo Castillo-López, Nasredine Semmar, and Gaël de Chalendar

Read the full publication


Access all our CORTEX2 publications.

Subscribe to our newsletter

This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement N° 101070192. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Union’s Horizon Europe research and innovation programme. Neither the European Union nor the granting authority can be held responsible for them.