We have published a paper at the Leading and Managing in the Digital Era (LMDE 2023) International Conference, “CORTEX2 – Extended collaborative telepresence for future work and education”. Our colleague Vasileios Theodorou, Master Research Engineer at Intracom Telecom, will present it at the LMDE Research Colloquium, which will take place from June 21 to 23 on the Greek island of Syros.
Abstract
As a consequence of the COVID-19 pandemic, working remotely or from a “home office” has become mainstream for a large number of companies. A wide range of European companies shifted their activities to remote working due to social-distancing restrictions and safety measures, making telepresence the new normal for many workforce segments. Around 5% of Europeans regularly worked from home before the pandemic; that figure has now risen to around 12.3%. This trend is here to stay, as 97% of workers would like to work remotely at least some of the time for the rest of their careers. Teams have started to take up collaborative remote working environments, adopting online tools such as video or teleconferencing systems and using distributed project/task management platforms and virtual whiteboards. Although basic digital features are gaining popularity, the services and applications available on the market are not yet adequate to efficiently support activities that involve physical interaction with remote objects. Critical examples include skills training for production sites, remote support for complex maintenance tasks, and activity planning that depends on local physical configurations. Even for standard business meetings, the currently available solutions are often unsatisfactory because they are limited to participants whose devices all have identical capabilities.
The new digital era offers more than the exchange of audio and video streams for collaboration. We are currently witnessing the emergence of extended reality (XR) in both its Augmented Reality (AR) and Virtual Reality (VR) variants, and concepts such as digital twins for factories and production sites have gained traction. However, their practical implementation necessitates the digitalisation, calibration, storage and preparation of existing assets, putting these tools out of reach for many small and medium enterprises.
In the recently launched European project CORTEX2, we are setting the basis for future extended collaborative telepresence, allowing remote cooperation in virtually all industrial and business sectors, both for productive work and for education and training. Our idea merges classical video conferencing with extended reality: real assets such as objects, machines or environments can be digitalised and shared with distant users for teamwork in a continuous real-virtual space.
In essence, the CORTEX2 framework will make it possible to create shared working experiences between multiple distant users in different operating modes. In the Virtual Reality mode, participants will be able to create virtual meeting rooms where each user is represented by a virtual avatar. They will also have the possibility to appear as video-based holograms in the virtual rooms, with an option to anonymise their appearance using an AI-based video appearance generator while keeping their original facial expressions. Participants will be able to exchange documents, 3D objects and other assets, and will be accompanied by an AI-powered meeting assistant with extended capabilities such as natural speech interaction, meeting summarisation and translation.
In the Augmented Reality mode, participants will be able to share their immediate surroundings through a simplified digitalisation process that produces a textured 3D model of their environment. Distant users can then use this model to identify, select and point to specific areas, which are in turn highlighted in the original user’s view using Augmented Reality techniques such as virtual arrows and highlights.
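To illustrate what such remote pointing could look like in practice, here is a minimal sketch of an annotation message anchored to the shared 3D model. All field names and the message shape are hypothetical illustrations, not part of the CORTEX2 specification:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ARAnnotation:
    """A distant user's pointer, anchored to a position on the shared 3D model."""
    annotation_id: str
    position: tuple   # (x, y, z) in the model's coordinate frame
    kind: str         # e.g. "arrow" or "highlight"
    author: str

def to_message(annotation: ARAnnotation) -> str:
    """Serialise the annotation for transport to the local user's AR view."""
    return json.dumps(asdict(annotation))

def from_message(raw: str) -> ARAnnotation:
    """Rebuild the annotation on the receiving side."""
    payload = json.loads(raw)
    payload["position"] = tuple(payload["position"])
    return ARAnnotation(**payload)

# A remote expert points at a spot on the scanned environment:
msg = to_message(ARAnnotation("a1", (0.42, 1.10, -0.35), "arrow", "expert"))
restored = from_message(msg)
```

Because the annotation is expressed in the model’s coordinate frame rather than in screen coordinates, the local device can re-project it into the user’s live view from any angle.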
To make the experience more immersive, rich contextual IoT information will be integrated into video streams and rendered as AR annotations on top of displayed objects and persons. To this end, data gathered from a multitude of heterogeneous IoT devices will be ingested, aggregated, processed and prepared, ultimately generating layers of insightful information related to smart assets in various vertical domains. For this purpose, a versatile IoT platform will be developed to collect data from connected devices and sensors and bring them into a unified, IoT-protocol-agnostic view, allowing the seamless management of IoT information and its custom “shaping” into layers of aggregated IoT data.
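The idea of a protocol-agnostic view can be sketched as follows: readings arriving with protocol-specific field names are mapped onto one unified schema, then grouped into per-asset layers ready to be rendered as AR annotations. The protocols, topic names and field layouts below are illustrative assumptions, not the actual CORTEX2 data model:

```python
from collections import defaultdict

# Hypothetical raw readings as delivered by two different IoT protocols;
# each protocol uses its own field names, which the platform must abstract away.
raw_readings = [
    {"proto": "mqtt", "topic": "plant/press/temp", "val": 78.2, "ts": 1700000000},
    {"proto": "opcua", "node": "ns=2;s=Press.Temperature", "value": 79.1, "ts": 1700000060},
    {"proto": "mqtt", "topic": "plant/press/vibration", "val": 0.6, "ts": 1700000000},
]

def normalise(reading: dict) -> dict:
    """Map protocol-specific fields onto one unified schema."""
    if reading["proto"] == "mqtt":
        asset, metric = reading["topic"].rsplit("/", 1)
        return {"asset": asset, "metric": metric, "value": reading["val"], "ts": reading["ts"]}
    if reading["proto"] == "opcua":
        asset, metric = reading["node"].split(";s=")[1].rsplit(".", 1)
        return {"asset": asset, "metric": metric.lower(), "value": reading["value"], "ts": reading["ts"]}
    raise ValueError(f"unsupported protocol: {reading['proto']}")

def build_layers(readings: list) -> dict:
    """Group normalised readings into per-asset layers of current values,
    ready to be rendered as AR annotations on top of the asset."""
    layers = defaultdict(dict)
    for r in map(normalise, readings):
        layers[r["asset"]][r["metric"]] = r["value"]  # latest value wins
    return dict(layers)

layers = build_layers(raw_readings)
```

In a real deployment the normalisation rules would be far richer (units, timestamps, quality flags), but the principle is the same: downstream AR components only ever see the unified schema, never the source protocol.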
To demonstrate the added value of the CORTEX2 platform, three representative use cases have been selected for deployment.
Remote industrial maintenance use case in Augmented Reality
We will demonstrate that an immersive XR experience can be achieved with heterogeneous, off-the-shelf mobile devices under limited bandwidth conditions, while improving productivity and reducing the environmental footprint. The idea is that the performance of industrial maintenance tasks could be greatly increased if technicians could receive live contextual advice, augmented with automatically retrieved and computed information displayed in a relevant manner. On the other side of the communication link, an expert will have an immersive view of the maintenance environment to understand and fix the problem. This experience will showcase the benefits of several technologies: gesture analysis and scene semantics analysis to inject annotations into video streams; audio transcription and voice commands to control the immersive environment and to document and record the intervention; and support for mixing multiple videos and IoT data sources from non-immersive devices to compose an on-demand immersive collaboration space with augmented data.
Remote technical training use case in Virtual Reality
We will demonstrate that VR/AR allows efficient knowledge transmission in one-to-many situations, where a remote instructor can simultaneously help several trainees while referring to physical objects such as industrial equipment. The solution shall be deployable on different hardware, able to run a variety of scenarios, and adaptable to multiple topologies. This use case will demonstrate the usefulness of interacting with a 3D model of the machine to increase the training success rate, on-demand display of overlaid information, attention guidance through emphasis on selected parts of the model, use of animations, and integration of multiple media types.
Virtual business meetings in Virtual Reality
We will demonstrate that VR/AR-enriched business meetings allow seamless integration of remote participants and improve productivity. The tool will facilitate this integration using virtual and augmented reality techniques: on the one hand, by providing remote users with a perception of visual and auditory immersion close to real presence; on the other hand, by offering a representation of the remote person to the other participants of the meeting. This use case will demonstrate advanced augmented reality features such as visual and audio immersion of the remote user; overlay display of information concerning both the collaboration’s participants (name, function, profiles, etc.) and the interaction itself; and video-based or rendering-based representation of the remote user, with symbolic transcription of the remote person’s non-verbal communication acts.
From a technical perspective, the framework will build upon an existing videoconferencing platform (Rainbow from Alcatel Lucent Enterprise) and extend it with XR support and additional services. The framework is designed to be device-agnostic, with clearly defined APIs that allow the integration of novel devices as requested by users. As novel technologies are prone to raise concerns about societal implications and ethical issues, the team developing the framework includes experts in legal and ethical matters, as well as in the psychological and social factors influencing XR interactions.
The CORTEX2 platform is designed to be extensible by third parties in order to facilitate the addition of novel features. During the project, two open calls for participation will be launched, enabling the development of additional services and their validation through novel use cases.
Authors
Alain Pagani, Narek Minaskan, Alireza Javanmardi, Yaxu Xie, German Research Center for Artificial Intelligence (DFKI)
Sylvain Rivier, Vincent Bailleau, Emmanuel Helbert, Pierre-Yves Noel, Alcatel Lucent Enterprise
Yazid Benazzouz, Jean-Pierre Lorré, Linagora
Franklyn Ohai, Maja Nisevic, Anton Vedder, KU Leuven
Gael de Chalendar, French Alternative Energies and Atomic Energy Commission (CEA)
Vasileios Theodorou, Ilia Pietri, Maria-Evgenia Xezonaki, Intracom Telecom
Florian Andres, Lydia Szymendera, Olivier Grzelak, Actimage
Keywords
Extended Reality, Telecooperation, Telepresence.
Subscribe to our newsletter and follow us on LinkedIn and Twitter to stay updated with our upcoming publications!