CORTEX2 Publication

CORTEX2 Publication: Uni-SLAM: Uncertainty-Aware Neural Implicit SLAM for Real-Time Dense Indoor Scene Reconstruction

Our paper, “Uni-SLAM: Uncertainty-Aware Neural Implicit SLAM for Real-Time Dense Indoor Scene Reconstruction”, was presented at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2025 and won the “Best Academic Paper” award at the 9th International XR Metaverse Conference in Busan.

Abstract

Neural implicit fields have recently emerged as a powerful representation method for multi-view surface reconstruction due to their simplicity and state-of-the-art performance. However, reconstructing thin structures of indoor scenes while ensuring real-time performance remains a challenge for dense visual SLAM systems. Previous methods do not consider the varying quality of input RGB-D data and employ a fixed-frequency mapping process to reconstruct the scene, which can result in the loss of valuable information in some frames. In this paper, we propose Uni-SLAM, a decoupled 3D spatial representation based on hash grids for indoor reconstruction. We introduce a newly defined predictive uncertainty to reweight the loss function, along with a strategic local-to-global bundle adjustment. Experiments on synthetic and real-world datasets demonstrate that our system achieves state-of-the-art tracking and mapping accuracy while maintaining real-time performance. It significantly improves over current methods with a 25% reduction in depth L1 error and a 66.86% completion rate within 1 cm on the Replica dataset, reflecting a more accurate reconstruction of thin structures.
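The abstract's key mechanism is reweighting the loss by a predictive uncertainty, so that low-quality RGB-D measurements contribute less to mapping. As a purely illustrative sketch (the function name and the heteroscedastic form below are our own assumptions, not the paper's actual formulation):

```python
import numpy as np

def uncertainty_weighted_l1(pred_depth, gt_depth, sigma, eps=1e-6):
    """Reweight a per-pixel L1 depth loss by predictive uncertainty.

    Pixels with high uncertainty (sigma) are down-weighted, so noisy or
    low-quality RGB-D readings dominate the loss less; the log(sigma)
    term penalises the trivial solution of inflating sigma everywhere.
    """
    residual = np.abs(pred_depth - gt_depth)
    return float(np.mean(residual / (sigma + eps) + np.log(sigma + eps)))
```

Under this form, a pixel with residual 4 and sigma 2 contributes about 2.69 instead of 4, mirroring the idea of letting per-frame input quality modulate the mapping loss.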

Authors

Shaoxiang Wang, Yaxu Xie, Chun-Peng Chang, Christen Millerdurai, Alain Pagani, Didier Stricker

Read the full publication


Access all our CORTEX2 publications.



CORTEX2 Publication: Beyond Classical Cooperation: Reimagining Business Meetings through VR Conferencing

Our paper, “Beyond Classical Cooperation: Reimagining Business Meetings through VR Conferencing”, was presented at EuroXR 2025 and published in the Proceedings of the Application, Poster, and Demo Tracks of the 22nd EuroXR International Conference.

Abstract

This demo explores the use of immersive conferencing in Virtual Reality (VR), designed to connect both virtual and non-virtual participants in a shared collaborative space and extend participation to the widest possible audience. It focuses primarily on business meeting scenarios, while remaining adaptable to other use cases. The aim is to enhance immersive collaboration, fostering stronger engagement and a greater sense of presence, which is especially valuable in today’s remote working environments. In addition, this approach enables interactive co-design of virtual objects and overcomes the limitations of traditional conferencing tools. The solution also brings significant environmental benefits by minimizing travel requirements, reducing carbon emissions and environmental impact. Built on a modular architecture, the application offers scalability and supports the addition of new functionalities.

Authors

Yazid Benazzouz, Alexis Lombard, Jean-Pierre Lorré, and Gaël de Chalendar

Read the full publication


Access all our CORTEX2 publications.



CORTEX2 Publication: CoVA: LLM Data Augmentation to Enhance User-Defined Intents Recognition for Voice-Based Assistance in Collaborative XR Environments

Our paper, “CoVA: LLM Data Augmentation to Enhance User-Defined Intents Recognition for Voice-Based Assistance in Collaborative XR Environments”, was presented at EuroXR 2025 and published in the Proceedings of the Application, Poster, and Demo Tracks of the 22nd EuroXR International Conference.

Abstract

In this work, we present CoVA, a virtual assistant designed to participate in business meetings, remote training scenarios, and industrial collaboration sessions. To ensure versatility across use case scenarios, a virtual assistant should excel at supporting not only predefined conversational patterns for key information delivery and action triggering, but also dynamic navigation through shared documents and structured knowledge sources to remain aligned with user-provided materials related to the assisted work session (previous meeting reports, project-related content, domain-specific graphs). Offering both capabilities requires efficient and reliable intent classification to distinguish in-scope from out-of-scope queries and route them to the appropriate processing pipeline, despite potentially limited and heterogeneous intent examples. Thus, we present an intent-agnostic method to recognize user intent. Additionally, we describe how CoVA implements a Retrieval-Augmented Generation approach to provide information contained in shared documents.
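To make the in-scope / out-of-scope routing idea concrete, here is a minimal sketch under our own assumptions: one prototype embedding per user-defined intent, cosine similarity, and a fixed threshold below which the query is routed to the retrieval-augmented (RAG) pipeline. The prototype names, the threshold, and the matching scheme are illustrative only, not CoVA's actual implementation:

```python
import numpy as np

def route_query(query_vec, intent_prototypes, threshold=0.7):
    """Match a query embedding against one prototype vector per known
    intent; below-threshold matches are treated as out-of-scope and
    routed to the document-grounded (RAG) pipeline instead."""
    def cos(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    best_intent, best_sim = None, -1.0
    for name, proto in intent_prototypes.items():
        sim = cos(query_vec, proto)
        if sim > best_sim:
            best_intent, best_sim = name, sim
    return best_intent if best_sim >= threshold else "rag_pipeline"
```

In practice, the threshold would be tuned on held-out examples, which matters precisely because, as the abstract notes, intent examples can be limited and heterogeneous.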

Authors

Alexis Lombard, Galo Castillo-López, Nasredine Semmar, and Gaël de Chalendar

Read the full publication


Access all our CORTEX2 publications.



Advancing mental health through digital innovation: Azucena García Palacios' keynote at EABCT 2025

On 4 September 2025, our colleague Azucena García Palacios, Director of the Laboratory of Psychology and Technology (Labpsitec) at Universitat Jaume I, was invited to deliver a keynote at the European Association for Behavioural and Cognitive Therapies (EABCT) Congress 2025 in Glasgow.

Her keynote, titled "Digital solutions for a global problem: improving access to mental health services through technology", addressed one of the most pressing challenges in healthcare today: the gap between available evidence-based treatments and people’s ability to access them.

Mental health as a global challenge

A significant proportion of the global population lives with a mental disorder, making it one of the leading causes of disability and disease burden. Despite the availability of effective treatments, health systems are often unable to provide adequate access. In high-income countries, for example, only around 25% of people with depression receive minimally adequate treatment. This gap is not only a health issue but also a social and ethical challenge, as untreated mental health conditions reduce quality of life, well-being, and equity.

Azucena highlighted that the problem lies not in the lack of effective treatments but in their limited accessibility. To address this, she called for innovative solutions that go beyond face-to-face therapy, diversifying how psychotherapy is delivered, ensuring scalability, and integrating and expanding the range of services offered.

Azucena García Palacios, Director of the Laboratory of Psychology and Technology at UJI, presenting CORTEX2 at EABCT 2025.

The role of digital solutions and XR

In her keynote, Azucena presented digital innovations in mental health, ranging from internet-based interventions and apps to mixed reality and artificial intelligence. These tools offer unique opportunities to expand reach and flexibility, making care more accessible and sustainable for health systems.

She also introduced the potential of extended reality (XR) solutions, sharing insights from the CORTEX2 project. She explained how XR shared spaces could open new possibilities for delivering psychological interventions, improving access, and supporting more engaging therapeutic experiences. She also emphasised the vital role of psychological factors and user experience in CORTEX2 in ensuring that these technologies are effective and seamlessly integrated into healthcare.

"Innovations like XR and digital technologies can help us scale evidence-based treatments, making mental health care more integrated into people’s daily lives." – Azucena García Palacios

A platform for global exchange

The EABCT 2025 Congress gathered more than 2,000 delegates from all 56 EABCT associations and 60 countries, providing a remarkable platform for knowledge exchange and collaboration. Azucena’s keynote was part of a distinguished line-up of speakers addressing the future of behavioural and cognitive therapies and their intersection with technology.

Her contribution highlighted the importance of scientific evidence and implementation research in overcoming barriers to adoption and ensuring that digital health solutions can truly improve people's lives at scale.


Discover more about the impact of our work at CORTEX2, the results of our funded projects, and how we are contributing to the evolution of XR technologies in Europe:

Follow us on LinkedIn, Bluesky and X

Subscribe to our newsletter



CORTEX2 at EuroXR 2025: Showcasing innovation and celebrating our final event

At CORTEX2, we are proud to have participated in EuroXR 2025, held from 3-5 September 2025 in Winterthur, Switzerland, at the ZHAW School of Management and Law. This year’s EuroXR marked a special milestone for us, as it served as the stage for our final event, organised in close collaboration with our sister project, SPIRIT. Together, we co-hosted a dedicated track titled: “Extended Collaborative Telepresence – How CORTEX² and SPIRIT are Advancing Innovation in XR.”

Organised by the European Association for eXtended Reality (EuroXR) and the Zurich University of Applied Sciences (ZHAW), EuroXR 2025 brought together researchers, industry experts, and innovators in Virtual, Augmented, and Mixed Reality from Europe and beyond.

The conference featured keynote speeches, plenary sessions, scientific and application tracks, as well as poster and demonstration sessions. Once again, it reaffirmed its status as a premier gathering for the extended reality (XR) community.

For us at CORTEX2, returning to EuroXR for the second year in a row, this edition was particularly meaningful. It allowed us to share the main achievements from three years of hard work, celebrate the outcomes of our funded projects, and showcase the impact of European innovation in XR.

“For us at CORTEX2, returning to EuroXR was particularly meaningful — it allowed us to share the main achievements from three years of hard work, celebrate the outcomes of our funded projects, and showcase the impact of European innovation in XR.”

Assistants and panellists in our Special Session at EuroXR 2025.

Celebrating our XR innovators

One of the highlights of the event was the active participation of our XR innovators, the winners of our Open Calls 1 and 2, who travelled from across Europe to present the results of their projects.

“One of the highlights of EuroXR 2025 was the active participation of our XR innovators, who travelled from across Europe to present their solutions and showcase real-world applications.”

Through fast-track pitches, they introduced their solutions in concise presentations, and in poster and demo sessions, they showcased research outcomes, prototypes and real-world applications across different themes, such as: “XR Technology and Infrastructure”, “Enhancement of XR Objects, Representations and Experiences”, “XR Training in/for the Manufacturing Domain / Robot Tele-Operation”, and “Health, Entertainment and Social XR Applications”.

These sessions offered a unique opportunity to highlight the impact of our 30 funded projects, developed through the CORTEX2 Support Programme, and to give visibility to their results within the XR community. In total, over 50 funded initiatives from the CORTEX2 and SPIRIT cascade funding programmes were represented.

CORTEX2 and SPIRIT innovators’ demo sessions at EuroXR 2025.

Spotlight on our pilots

Another key moment was the presentation of our three pilots, focused on AR Industrial Remote Cooperation, VR Remote Technical Training, and VR Business Meetings, to an audience of approximately 200 experts.

These pilots have played a crucial role in demonstrating the integration of the CORTEX2 framework in real-world contexts, while assessing not only the technical performance but also the human, social, and societal impact of XR-based collaboration.

“Our pilots demonstrated the integration of the CORTEX2 framework in real-world contexts, assessing not only technical performance but also human, social, and societal impact.”

CORTEX2 and SPIRIT Special Session at EuroXR 2025.

Discussing the future of XR

Our coordinator, Alain Pagani, Principal Researcher of Computer Vision at the German Research Center for Artificial Intelligence (DFKI), took part in an inspiring session on the future of XR, with leading voices from other European-funded projects.

The panel was moderated by Krzysztof Walczak, from the Poznan University of Economics and Business, and featured valuable insights from Peter Van Daele (SPIRIT project & Ghent University), Vincenzo Croce (SUN project), Muhammad Zeshan Afzal (LUMINOUS project & DFKI), Arta Ertekin (OPENVERSE project & Martel Innovate), Didier Stricker (SHARESPACE project & DFKI), and Anastasia Sergeeva (THEIA-XR project & University of Luxembourg).

Together, they explored key questions on the opportunities and challenges ahead for XR: How will advances in AI shape the future of XR experiences and applications? What could help XR move from niche use to broader acceptance in society and industry? How should Europe position itself in the global XR landscape?

EU projects “Future of XR” session at EuroXR 2025.

Strengthening collaboration with SPIRIT

The success of our final event was made possible through our close partnership with the SPIRIT project. Together, we organised and co-hosted our track, which marked the final event for both our projects.

“Together with SPIRIT, we showed the power of EU-funded projects working together to amplify their impact and scale their results.”

This collaboration demonstrates the power of EU-funded projects working together to amplify their impact and scale their results, just as we do in our Beyond XR cluster. This initiative, now bringing together more than 14 European projects, received a special mention from Anne Bajart, Deputy Head of Unit at DG CONNECT of the European Commission. In her keynote speech, she highlighted it as a great example of a community-driven initiative and expressed her hope for its continued activity and future success.

This joint achievement would not have been possible without the outstanding work of our CORTEX2 innovators, who presented their groundbreaking results; the dedication of our CORTEX2 team over the past three years, as well as the commitment of our pilots team, and the mentors in our Support Programme; and the EuroXR Association for providing such an excellent platform to bring the XR community together.

“This joint achievement would not have been possible without the outstanding work of our innovators, the dedication of our team, and the collaboration fostered through EuroXR.”

Together, we have shown the strength of European collaboration in shaping the future of XR!

The CORTEX2 team at EuroXR 2025.

Discover more about the impact of our work, the results of our funded projects, and how we are contributing to the continued evolution of XR technologies in Europe:

Follow us on LinkedIn, Bluesky and X

Subscribe to our newsletter



Agenda: CORTEX2 and SPIRIT's final event at EuroXR 2025

At CORTEX2, we are excited to participate in the 22nd EuroXR International Conference – EuroXR 2025, taking place on 3-5 September 2025 at the ZHAW School of Management and Law in Winterthur, Switzerland.

In a major joint effort with the SPIRIT project, CORTEX2 will co-host a special session titled: "Extended Collaborative Telepresence – How CORTEX2 and SPIRIT are Advancing Innovation in XR."

This unique track will serve as the final event for both projects, showcasing the outcomes of their cascade funding programmes, with over 50 funded initiatives represented. The session will feature research outcomes, demonstrations, and real-world applications, offering valuable insights into how European projects are shaping the future of XR.

The EuroXR 2025 conference, co-organised by the European Association for eXtended Reality (EuroXR) and ZHAW Zurich University of Applied Sciences, provides a dynamic platform for knowledge exchange and collaboration among researchers, industry professionals, and technology experts working in Virtual, Augmented, and Mixed Reality.

The conference programme will feature:

  • Keynote speeches from XR thought leaders
  • Plenary sessions on emerging trends
  • Scientific and application tracks, alongside poster and demo sessions

The CORTEX2 and SPIRIT projects will take part in the panel "Future of XR" on Friday, 5 September, along with other key European projects in the area, including Openverse, LUMINOUS, and THEIA-XR.

Register now!

This event marks a milestone in celebrating the impact of CORTEX2’s work, the achievements of its funded projects, and the continued evolution of XR technologies in Europe.

Join us in Winterthur to explore the future of XR!



CORTEX2 innovators: PETER's 2nd progress update

Q: What has PETER achieved now that the CORTEX2 Support Programme is complete?

A: We were able to meet or exceed all KPIs. At the end of the programme, we have a complete voice-to-voice pipeline that translates between English, French, and German in three seconds while preserving the tone of urgency. PETER works as a stand-alone tool but is ready for integration into the Rainbow platform. Moreover, beyond the contracted work, we added Italian as a fourth language and prepared the pipeline to detect emotions, not just urgency. For the CORTEX2 community, but also for the B2B public, PETER can unlock new scenarios focused on personal safety and security, emergency handling, and work in potentially hazardous environments.

Q: What would you highlight about the CORTEX2 Support Programme? From the CORTEX2 experience, what has helped advance your solution the most?

A: Apart from the undeniable effect of the financial support, we believe that contact with CORTEX2 people (our mentor, the technical team at Alcatel-Lucent, the project coordinator) was a continuous stimulus to keep going, overcome difficulties, and exceed what we promised. Contact with the other open call winners has also been interesting, as we discovered a real interest in PETER and its application potential.

Q: What is the status of PETER after completing the Programme? What are your next steps?

A: We can declare a final TRL 5. The pipeline is completely working from a technical point of view and validated with a satisfactory psychometric assessment. We would like to continue the activity on PETER, adding more languages and the ability to transfer emotion, balancing privacy concerns with usefulness in real situations. We may also explore how to extend PETER to less represented languages, like those in African countries or in certain areas of Eastern Europe.


Discover more about PETER and stay informed about its progress!

Want to explore more XR innovation? Browse all our supported projects on the CORTEX2 website:

Open Call 1 winners  -  Open Call 2 winners



CORTEX2 innovators: VIRTEX's 2nd progress update

Q: What has VIRTEX achieved now that the CORTEX2 Support Programme is complete?

A: Throughout the CORTEX2 Support Programme, VIRTEX has made significant progress in developing its no-code XR authoring platform. Key milestones include the implementation of real-time collaboration features (chat, communication bubbles, scene synchronisation), integration of a Cortex Virtual Assistant for scenario creation, and an IoT interface for context-aware simulations. Pilot testing at Ludwig Maximilians University validated the platform’s usability, while feedback has informed iterative improvements. We’ve also advanced dissemination through EuroXR 2025 and deepened collaboration with other CORTEX2 projects. An initial exploitation plan is in place, with defined IP terms and licensing options, paving the way for market entry.

Q: What would you highlight about the CORTEX2 Support Programme? From the CORTEX2 experience, what has helped advance your solution the most?

A: The mentorship and technical guidance have helped us refine key components like real-time collaboration, scenario logic, and user management. The access to a multidisciplinary network, ranging from fellow innovators to XR experts, fostered meaningful collaborations, particularly in gesture recognition and avatar interaction. Finally, the structured feedback from pilot testing and the business training sessions supported our exploitation planning, helping us shape a realistic path to market.

Q: What is the status of VIRTEX after completing the Programme? What are your next steps?

A: VIRTEX is now a functional prototype validated through academic pilot testing. Based on positive pilot feedback, we are now focusing on what the market demands. We are exploring refining the user experience, expanding content libraries, and integrating AI-driven avatars. Our next steps include finalising our go-to-market strategy and securing strategic partnerships to support scale-up and commercialisation.


Discover more about VIRTEX and stay informed about its progress!

Want to explore more XR innovation? Browse all our supported projects on the CORTEX2 website:

Open Call 1 winners  -  Open Call 2 winners


CORTEX2 innovators: FLYTEX's 3rd progress update

In Sprint 3 of the CORTEX2 Support programme, FLYTEX delivered a fully functional prototype that streams real-time IoT data into a web XR environment, proving both technical robustness and readiness for real-world deployment.

Read on to learn about FLYTEX's latest breakthroughs — and what’s next!

FLYTEX's progress during Sprint 3 of the CORTEX2 Programme

Q: How would you summarise FLYTEX's latest developments during Sprint 3 of the CORTEX2 programme?

A: During Sprint 3, we were able to deploy and optimise a working prototype capable of sending real-time data from multiple devices and seamlessly integrating this data into the CORTEX2 platform, showing the data in a web XR environment. This prototype not only ensured the accurate and timely transmission of sensor readings but also validated the compatibility of our system with CORTEX2’s data ingestion pipelines. By utilising robust communication protocols and ensuring adherence to platform requirements, we laid the groundwork for scalable and efficient device integration.

FLYTEX's test deployment during Sprint 3 of the CORTEX2 Programme.

Q: What milestones did FLYTEX reach during Sprint 3, and what impact do they have?

A: The key milestones were the finalisation of the prototype and its deployment in an agricultural environment, ensuring data integrity and real-time responsiveness across the scenario. We met all three defined KPIs: integration of at least six IoT device types, at least two IoT devices in a meeting room, and IoT data upload in under 45 seconds.

Q: What are FLYTEX's next steps?

A: We are working on a business plan to offer the developed feature as part of Flythings' services.

To end, we would like to thank our mentors and the CORTEX2 team for their support.


Check out FLYTEX's previous interviews and stay updated on its progress!

Want to know more about other CORTEX2 innovators' updates? Browse all our supported teams on the CORTEX2 website:

Open Call 1 winners  -  Open Call 2 winners



CORTEX2 innovators: SENSO3D's 3rd progress update

SENSO3D has wrapped up Sprint 3 of the CORTEX2 Support Programme with major advances in AI-driven 3D content creation, creating tools that make building immersive AR/VR spaces easier, smarter, and more accessible than ever.

Read on to learn about SENSO3D's latest breakthroughs — and what’s next!

SENSO3D's progress during Sprint 3 of the CORTEX2 Programme

Q: How would you summarise SENSO3D's latest developments during Sprint 3 of the CORTEX2 programme?

A: In Sprint 3, SENSO3D took a confident leap toward making 3D content creation accessible and smart. We finalised our 3D model library with over 1,000 structured assets, refined our AI tools to detect and reconstruct objects from 2D images with impressive accuracy, and developed a new prompt-based scene generation tool that can bring environments to life with just a line of text. These achievements bring us closer to our vision: intuitive and powerful tools for creating immersive AR/VR spaces.

https://www.youtube.com/watch?v=scb8PTrogL8

Q: What milestones did SENSO3D reach during Sprint 3, and what impact do they have?

A: Some of our most exciting milestones included finalising the 3D Unity-ready model library, launching a working prototype of our image-based 3D search engine, and successfully testing our prompt-based scene creation tool with real users. We have completed the integration of our models into Unity and WebXR, making our tools ready for use in real-world virtual environments. These steps significantly lower the barriers for developers, designers, and educators to build rich XR experiences, opening the door to creative, interactive, and highly customizable virtual spaces.

Q: What are SENSO3D's next steps?

A: Next, we’re focusing on fine-tuning our tools based on user feedback, expanding our scene customisation features, and preparing our system for broader deployment within the CORTEX2 ecosystem. We’re also excited to collaborate more closely with other pilot teams, making sure our tools are easy to adopt and integrate.

We’d like to thank the CORTEX2 mentors and community for their valuable feedback and encouragement. Their support has helped us sharpen our ideas and keep pushing forward. With the finish line in sight, we’re more excited than ever to share what’s coming next.

https://www.youtube.com/watch?v=HXM7Pb6M8WA

 


Check out SENSO3D's previous interviews and stay updated on its progress!

Want to know more about other CORTEX2 innovators' updates? Browse all our supported teams on the CORTEX2 website:

Open Call 1 winners  -  Open Call 2 winners



CORTEX2 Publication: Intent Recognition and Out-of-Scope Detection using LLMs in Multi-party Conversations

Our paper, “Intent Recognition and Out-of-Scope Detection using LLMs in Multi-party Conversations”, was presented at the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2025).

Abstract

Intent recognition is a fundamental component of task-oriented dialogue systems (TODS). Determining user intents and detecting whether an intent is Out-of-Scope (OOS) is crucial for TODS to provide reliable responses. However, traditional TODS require large amounts of annotated data. In this work, we propose a hybrid approach that combines BERT and LLMs in zero- and few-shot settings to recognize intents and detect OOS utterances. Our approach leverages the LLMs’ generalization power and BERT’s computational efficiency in such scenarios. We evaluate our method on multi-party conversation corpora and observe that sharing information from BERT outputs with LLMs leads to system performance improvement.
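As a rough illustration of such a hybrid (the confidence threshold, prompt wording, and function names below are our own assumptions, not the paper's code): a cheap BERT classifier answers directly when it is confident, and otherwise an LLM is consulted, with BERT's top-k candidates shared in the prompt.

```python
def hybrid_intent(utterance, bert_scores, llm, tau=0.8, k=3):
    """Hybrid intent recognition with an out-of-scope fallback.

    bert_scores: dict mapping intent label -> BERT softmax score.
    llm: any callable taking a prompt string and returning a label.
    Returns (label, source) where source is "bert" or "llm".
    """
    best = max(bert_scores, key=bert_scores.get)
    if bert_scores[best] >= tau:          # confident: use the cheap model
        return best, "bert"
    # Uncertain: share BERT's top-k candidates with the LLM.
    topk = sorted(bert_scores, key=bert_scores.get, reverse=True)[:k]
    prompt = (f"Utterance: {utterance}\n"
              f"Candidate intents: {', '.join(topk)}, or out_of_scope.\n"
              "Answer with a single label.")
    return llm(prompt).strip(), "llm"
```

Only ambiguous utterances pay the LLM's latency cost, which is one way to combine, as the abstract puts it, the LLMs' generalization power with BERT's computational efficiency.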

Authors

Galo Castillo-López, Gaël de Chalendar, Nasredine Semmar

Read the full publication


Access all our CORTEX2 publications.



CORTEX2 innovators: VISOR's 3rd progress update

VISOR wraps up Sprint 3 of the CORTEX2 Support Programme, reaching a major milestone — delivering a powerful 3D reconstruction service that transforms small objects into high-quality digital twins.


Read on to learn about VISOR's latest breakthroughs — and what’s next!

VISOR's progress during Sprint 3 of the CORTEX2 Programme

Q: How would you summarise VISOR's latest developments during Sprint 3 of the CORTEX2 programme?

A: The primary objective of the last Sprint was to validate the developed 3D reconstruction software service and assess its ability to efficiently reconstruct small objects in 3D. The service is accessible through a user-friendly web portal, where users can upload their 2D images or video sequences to generate accurate digital twins of small physical objects through geometric reconstruction and colour information extraction.

During this phase, we tweaked and finalised our 3D reconstruction module and created a comprehensive documentation of the VISOR service. We created guidelines that were published on the VISOR web portal to help users capture their small objects and maximise the quality of the 3D reconstructed output.

VISOR's test reconstruction of a plant during Sprint 3.

Q: What milestones did VISOR reach during Sprint 3, and what impact do they have?

A: In this final Sprint, our project reached the final key milestone “MS3 - Final Product”, where we have successfully delivered a 3D reconstruction service that efficiently reconstructs small physical objects into high-quality 3D models. We have validated the performance of our service and the quality of the generated 3D objects by testing it on a diverse range of objects, varying in both geometric complexity and texture detail. We have also achieved all the specified KPIs, offering a service that delivers fast reconstruction times, supports input from both images and video sequences, is validated on more than 30 small physical objects and generates a high-quality 3D textured model, ready to be shared and used by various 3D engines on Desktop, Web or VR/AR/XR environments.

Q: What are VISOR's next steps?

A: Our next step is to collaborate with other CORTEX2 Open Call winners we connected with during an internal matchmaking event, integrating the VISOR service into their use cases. In parallel, we plan to implement the business plan we have prepared to further exploit VISOR commercially, as well as in future research projects.

We would like to highlight the productive and supportive collaboration with the CORTEX2 team throughout the project. Their continuous support and guidance helped us develop an efficient 3D reconstruction service that reconstructs small physical objects into shareable high-quality 3D textured models.


Check out VISOR's previous interviews and stay updated on its progress!

Want to know more about other CORTEX2 innovators' updates? Browse all our supported teams on the CORTEX2 website:

Open Call 1 winners  -  Open Call 2 winners


Subscribe to our newsletter

This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement N° 101070192. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Union’s Horizon Europe research and innovation programme. Neither the European Union nor the granting authority can be held responsible for them.