
Advancing mental health through digital innovation: Azucena García Palacios' keynote at EABCT 2025

On 4 September 2025, our colleague Azucena García Palacios, Director of the Laboratory of Psychology and Technology (Labpsitec) at Universitat Jaume I, was invited to deliver a keynote at the European Association for Behavioural and Cognitive Therapies (EABCT) Congress 2025 in Glasgow.

Her keynote, titled "Digital solutions for a global problem: improving access to mental health services through technology", addressed one of the most pressing challenges in healthcare today: the gap between available evidence-based treatments and people’s ability to access them.

Mental health as a global challenge

A significant proportion of the global population lives with a mental disorder, making it one of the leading causes of disability and disease burden. Despite the availability of effective treatments, health systems are often unable to provide adequate access. In high-income countries, for example, only around 25% of people with depression receive minimally adequate treatment. This gap is not only a health issue but also a social and ethical challenge, as untreated mental health conditions reduce quality of life, well-being, and equity.

Azucena highlighted that the problem lies not in the lack of effective treatments but in their limited accessibility. To address this, she called for innovative solutions that go beyond face-to-face therapy, diversifying how psychotherapy is delivered, ensuring scalability, and integrating and expanding the range of services offered.

Azucena García Palacios, Director of the Laboratory of Psychology and Technology at UJI, presenting CORTEX2 at EABCT 2025.

The role of digital solutions and XR

In her keynote, Azucena presented digital innovations in mental health, ranging from internet-based interventions and apps to mixed reality and artificial intelligence. These tools offer unique opportunities to expand reach and flexibility, making care more accessible and sustainable for health systems.

She also introduced the potential of extended reality (XR) solutions, sharing insights from the CORTEX2 project. She explained how XR shared spaces could open new possibilities for delivering psychological interventions, improving access, and supporting more engaging therapeutic experiences. She also emphasised the vital role of psychological factors and user experience in CORTEX2 in ensuring that these technologies are effective and seamlessly integrated into healthcare.

"Innovations like XR and digital technologies can help us scale evidence-based treatments, making mental health care more integrated into people’s daily lives." – Azucena García Palacios

A platform for global exchange

The EABCT 2025 Congress gathered more than 2,000 delegates from all 56 EABCT associations and 60 countries, providing a remarkable platform for knowledge exchange and collaboration. Azucena’s keynote was part of a distinguished line-up of speakers addressing the future of behavioural and cognitive therapies and their intersection with technology.

Her contribution highlighted the importance of scientific evidence and implementation research in overcoming barriers to adoption and ensuring that digital health solutions can truly improve people's lives at scale.


Discover more about the impact of our work at CORTEX2, the results of our funded projects, and how we are contributing to the evolution of XR technologies in Europe:

Follow us on LinkedIn, Bluesky and X

Subscribe to our newsletter



CORTEX2 at EuroXR 2025: Showcasing innovation and celebrating our final event

At CORTEX2, we are proud to have participated in EuroXR 2025, held from 3-5 September 2025 in Winterthur, Switzerland, at the ZHAW School of Management and Law. This year’s EuroXR marked a special milestone for us, as it served as the stage for our final event, organised in close collaboration with our sister project, SPIRIT. Together, we co-hosted a dedicated track titled: “Extended Collaborative Telepresence – How CORTEX² and SPIRIT are Advancing Innovation in XR.”

Organised by the European Association for eXtended Reality (EuroXR) and the Zurich University of Applied Sciences (ZHAW), EuroXR 2025 brought together researchers, industry experts, and innovators in Virtual, Augmented, and Mixed Reality from Europe and beyond.

The conference featured keynote speeches, plenary sessions, scientific and application tracks, as well as poster and demonstration sessions. Once again, it reaffirmed its status as a premier gathering for the extended reality (XR) community.

For us at CORTEX2, returning to EuroXR for the second year in a row, this edition was particularly meaningful. It allowed us to share the main achievements from three years of hard work, celebrate the outcomes of our funded projects, and showcase the impact of European innovation in XR.

“For us at CORTEX2, returning to EuroXR was particularly meaningful — it allowed us to share the main achievements from three years of hard work, celebrate the outcomes of our funded projects, and showcase the impact of European innovation in XR.”

Assistants and panellists in our Special Session at EuroXR 2025.

Celebrating our XR innovators

One of the highlights of the event was the active participation of our XR innovators, the winners of our Open Calls 1 and 2, who travelled from across Europe to present the results of their projects.

“One of the highlights of EuroXR 2025 was the active participation of our XR innovators, who travelled from across Europe to present their solutions and showcase real-world applications.”

Through fast-track pitches, they introduced their solutions in concise presentations, and in poster and demo sessions, they showcased research outcomes, prototypes and real-world applications across different themes, such as: “XR Technology and Infrastructure”, “Enhancement of XR Objects, Representations and Experiences”, “XR Training in/for the Manufacturing Domain / Robot Tele-Operation”, and “Health, Entertainment and Social XR Applications”.

These sessions offered a unique opportunity to highlight the impact of our 30 funded projects, developed through the CORTEX2 Support Programme, and to give visibility to their results within the XR community. In total, over 50 funded initiatives from the CORTEX2 and SPIRIT cascade funding programmes were represented.

CORTEX2 and SPIRIT innovators’ demo sessions at EuroXR 2025.

Spotlight on our pilots

Another key moment was the presentation of our three pilots, focused on AR Industrial Remote Cooperation, VR Remote Technical Training, and VR Business Meetings, to an audience of approximately 200 experts.

These pilots have played a crucial role in demonstrating the integration of the CORTEX2 framework in real-world contexts, while assessing not only the technical performance but also the human, social, and societal impact of XR-based collaboration.

“Our pilots demonstrated the integration of the CORTEX2 framework in real-world contexts, assessing not only technical performance but also human, social, and societal impact.”

CORTEX2 and SPIRIT Special Session at EuroXR 2025.

Discussing the future of XR

Our coordinator, Alain Pagani, Principal Researcher of Computer Vision at the German Research Center for Artificial Intelligence (DFKI), took part in an inspiring session on the future of XR, with leading voices from other European-funded projects.

The panel was moderated by Krzysztof Walczak, from the Poznan University of Economics and Business, and featured valuable insights from Peter Van Daele (SPIRIT project & Ghent University), Vincenzo Croce (SUN project), Muhammad Zeshan Afzal (LUMINOUS project & DFKI), Arta Ertekin (OPENVERSE project & Martel Innovate), Didier Stricker (SHARESPACE project & DFKI), and Anastasia Sergeeva (THEIA-XR project & University of Luxembourg).

Together, they explored key questions on the opportunities and challenges ahead for XR: How will advances in AI shape the future of XR experiences and applications? What could help XR move from niche use to broader acceptance in society and industry? How should Europe position itself in the global XR landscape?

EU projects “Future of XR” session at EuroXR 2025.

Strengthening collaboration with SPIRIT

The success of our final event was made possible through our close partnership with the SPIRIT project. Together, we organised and co-hosted our track, which marked the final event for both our projects.

“Together with SPIRIT, we showed the power of EU-funded projects working together to amplify their impact and scale their results.”

This collaboration demonstrates the power of EU-funded projects working together to amplify their impact and scale their results, just as we do in our Beyond XR cluster. This initiative, now bringing together more than 14 European projects, received a special mention from Anne Bajart, Deputy Head of Unit at DG CONNECT of the European Commission. In her keynote speech, she highlighted it as a great example of a community-driven initiative and expressed her hope for its continued activity and future success.

This joint achievement would not have been possible without the outstanding work of our CORTEX2 innovators, who presented their groundbreaking results; the dedication of our CORTEX2 team over the past three years, as well as the commitment of our pilots team, and the mentors in our Support Programme; and the EuroXR Association for providing such an excellent platform to bring the XR community together.

“This joint achievement would not have been possible without the outstanding work of our innovators, the dedication of our team, and the collaboration fostered through EuroXR.”

Together, we have shown the strength of European collaboration in shaping the future of XR!

The CORTEX2 team at EuroXR 2025.

Discover more about the impact of our work, the results of our funded projects, and how we are contributing to the continued evolution of XR technologies in Europe:

Follow us on LinkedIn, Bluesky and X

Subscribe to our newsletter



Agenda: CORTEX2 and SPIRIT's final event at EuroXR 2025

At CORTEX2, we are excited to participate in the 22nd EuroXR International Conference – EuroXR 2025, taking place on 3-5 September 2025 at the ZHAW School of Management and Law in Winterthur, Switzerland.

In a major joint effort with the SPIRIT project, CORTEX2 will co-host a special session titled: "Extended Collaborative Telepresence – How CORTEX2 and SPIRIT are Advancing Innovation in XR."

This unique track will serve as the final event for both projects, showcasing the outcomes of their cascade funding programmes, with over 50 funded initiatives represented. The session will feature research outcomes, demonstrations, and real-world applications, offering valuable insights into how European projects are shaping the future of XR.

The EuroXR 2025 conference, co-organised by the European Association for eXtended Reality (EuroXR) and ZHAW Zurich University of Applied Sciences, provides a dynamic platform for knowledge exchange and collaboration among researchers, industry professionals, and technology experts working in Virtual, Augmented, and Mixed Reality.

The conference programme will feature:

  • Keynote speeches from XR thought leaders
  • Plenary sessions on emerging trends
  • Scientific and application tracks, alongside poster and demo sessions

The CORTEX2 and SPIRIT projects will take part in the panel "Future of XR" on Friday, 5 September, along with other key European projects in the area, including Openverse, LUMINOUS, and THEIA-XR.

Register now!

This event marks a milestone in celebrating the impact of CORTEX2’s work, the achievements of its funded projects, and the continued evolution of XR technologies in Europe.

Join us in Winterthur to explore the future of XR!



CORTEX2 innovators: PETER's 2nd progress update

Q: What has PETER achieved now that the CORTEX2 Support Programme is complete?

A: We were able to achieve or exceed all KPIs. At the end of the programme, we have a complete voice-to-voice pipeline that translates between English, French, and German in three seconds while preserving the tone of urgency. PETER works as a stand-alone tool but is ready for integration into the Rainbow platform. Moreover, in addition to the contract work, we added Italian as a fourth language and prepared the pipeline to detect emotions, not just urgency. For the CORTEX2 community, but also for the B2B public, PETER can unlock new scenarios focused on personal safety and security, emergency handling, and work in potentially hazardous environments.
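The stages of such a voice-to-voice pipeline can be sketched as follows. This is an illustrative stand-in, not PETER's actual code: the function names, the toy phrase table, and the urgency heuristic are assumptions made for the sketch, and real systems would use speech recognition, machine translation, and speech synthesis models at each step.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    text: str
    lang: str
    urgent: bool

def transcribe(audio: bytes, lang: str) -> Utterance:
    # Stand-in for a speech-to-text model; here the "audio" is just text bytes.
    text = audio.decode()
    # Toy urgency cue: exclamation marks or shouted (all-caps) words.
    urgent = "!" in text or any(w.isupper() and len(w) > 1 for w in text.split())
    return Utterance(text=text, lang=lang, urgent=urgent)

# Hypothetical phrase table standing in for a translation model.
PHRASES = {("en", "fr"): {"help!": "à l'aide !", "fire!": "au feu !"}}

def translate(utt: Utterance, target: str) -> Utterance:
    table = PHRASES.get((utt.lang, target), {})
    return Utterance(text=table.get(utt.text.lower(), utt.text),
                     lang=target, urgent=utt.urgent)

def synthesise(utt: Utterance) -> dict:
    # A real TTS stage would condition prosody on the urgency flag;
    # here we just speed up the speaking rate for urgent utterances.
    return {"text": utt.text, "lang": utt.lang, "rate": 1.3 if utt.urgent else 1.0}

def voice_to_voice(audio: bytes, source: str, target: str) -> dict:
    return synthesise(translate(transcribe(audio, source), target))
```

In this sketch, `voice_to_voice(b"Help!", "en", "fr")` carries the urgency flag through all three stages, which is the key design point: paralinguistic information detected at transcription time survives translation and shapes the synthesised output.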

Q: What would you highlight about the CORTEX2 Support Programme? From the CORTEX2 experience, what has helped advance your solution the most?

A: Apart from the undeniable effect of the financial support, we believe that contact with CORTEX2 people (our mentor, the technical team at Alcatel-Lucent, the project coordinator) was a continuous stimulus to keep going, overcome difficulties, and exceed what we promised. The contact with other open call winners has also been interesting, as we discovered a real interest in PETER and its application potential.

Q: What is the status of PETER after completing the Programme? What are your next steps?

A: We can declare a final TRL 5. The pipeline is completely working from a technical point of view and validated with a satisfactory psychometric assessment. We would like to continue the activity on PETER, adding more languages and the ability to transfer emotion, balancing privacy concerns with usefulness in real situations. We may also explore how to extend PETER to less represented languages, like those in African countries or in certain areas of Eastern Europe.


Discover more about PETER and stay informed about its progress!

Want to explore more XR innovation? Browse all our supported projects on the CORTEX2 website:

Open Call 1 winners  -  Open Call 2 winners



CORTEX2 innovators: VIRTEX's 2nd progress update

Q: What has VIRTEX achieved now that the CORTEX2 Support Programme is complete?

A: Throughout the CORTEX2 Support Programme, VIRTEX has made significant progress in developing its no-code XR authoring platform. Key milestones include the implementation of real-time collaboration features (chat, communication bubbles, scene synchronisation), integration of a Cortex Virtual Assistant for scenario creation, and an IoT interface for context-aware simulations. Pilot testing at Ludwig Maximilians University validated the platform’s usability, while feedback has informed iterative improvements. We’ve also advanced dissemination through EuroXR 2025 and deepened collaboration with other CORTEX2 projects. An initial exploitation plan is in place, with defined IP terms and licensing options, paving the way for market entry.

Q: What would you highlight about the CORTEX2 Support Programme? From the CORTEX2 experience, what has helped advance your solution the most?

A: The mentorship and technical guidance have helped us refine key components like real-time collaboration, scenario logic, and user management. The access to a multidisciplinary network, ranging from fellow innovators to XR experts, fostered meaningful collaborations, particularly in gesture recognition and avatar interaction. Finally, the structured feedback from pilot testing and the business training sessions supported our exploitation planning, helping us shape a realistic path to market.

Q: What is the status of VIRTEX after completing the Programme? What are your next steps?

A: VIRTEX is now a functional prototype validated through academic pilot testing. Based on positive pilot feedback, we are now focusing on what the market demands. We are exploring refining the user experience, expanding content libraries, and integrating AI-driven avatars. Our next steps include finalising our go-to-market strategy and securing strategic partnerships to support scale-up and commercialisation.


Discover more about VIRTEX and stay informed about its progress!

Want to explore more XR innovation? Browse all our supported projects on the CORTEX2 website:

Open Call 1 winners  -  Open Call 2 winners


CORTEX2 innovators: FLYTEX's 3rd progress update

In Sprint 3 of the CORTEX2 Support programme, FLYTEX delivered a fully functional prototype that streams real-time IoT data into a web XR environment, proving both technical robustness and readiness for real-world deployment.

Read on to learn about FLYTEX's latest breakthroughs — and what’s next!

FLYTEX's progress during Sprint 3 of the CORTEX2 Programme

Q: How would you summarise FLYTEX's latest developments during Sprint 3 of the CORTEX2 programme?

A: During Sprint 3, we were able to deploy and optimise a working prototype capable of sending real-time data from multiple devices and seamlessly integrating this data into the CORTEX2 platform, showing the data in a web XR environment. This prototype not only ensured the accurate and timely transmission of sensor readings but also validated the compatibility of our system with CORTEX2’s data ingestion pipelines. By utilising robust communication protocols and ensuring adherence to platform requirements, we laid the groundwork for scalable and efficient device integration.

FLYTEX's test deployment of the prototype.

Q: What milestones did FLYTEX reach during Sprint 3, and what impact do they have?

A: The key milestones were the finalisation of the prototype and its deployment in an agricultural environment, ensuring data integrity and real-time responsiveness across the scenario. We met all three defined KPIs: integration of at least 6 IoT device types, at least 2 IoT devices in a meeting room, and IoT data upload in under 45 seconds.

Q: What are FLYTEX's next steps?

A: We are working on a business plan to offer the developed feature as part of Flythings' services.

To end, we would like to thank our mentors and the CORTEX2 team for their support.


Check out FLYTEX's previous interviews and stay updated on its progress!

Want to know more about other CORTEX2 innovators' updates? Browse all our supported teams on the CORTEX2 website:

Open Call 1 winners  -  Open Call 2 winners



CORTEX2 innovators: SENSO3D's 3rd progress update

SENSO3D has wrapped up Sprint 3 of the CORTEX2 Support Programme with major advances in AI-driven 3D content creation, creating tools that make building immersive AR/VR spaces easier, smarter, and more accessible than ever.

Read on to learn about SENSO3D's latest breakthroughs — and what’s next!

SENSO3D's progress during Sprint 3 of the CORTEX2 Programme

Q: How would you summarise SENSO3D's latest developments during Sprint 3 of the CORTEX2 programme?

A: In Sprint 3, SENSO3D took a confident leap toward making 3D content creation accessible and smart. We finalised our 3D model library with over 1,000 structured assets, refined our AI tools to detect and reconstruct objects from 2D images with impressive accuracy, and developed a new prompt-based scene generation tool that can bring environments to life with just a line of text. These achievements bring us closer to our vision: intuitive and powerful tools for creating immersive AR/VR spaces.
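To illustrate the idea of prompt-based scene generation, here is a deliberately simple sketch that matches keywords from a prompt against a structured asset library and emits placements. The library entries and keyword matching are assumptions for illustration only; SENSO3D's actual tool uses AI models rather than keyword lookup.

```python
# Hypothetical miniature asset library: name -> metadata.
ASSET_LIBRARY = {
    "desk": {"category": "furniture", "size": (1.2, 0.7, 0.8)},
    "chair": {"category": "furniture", "size": (0.5, 0.5, 0.9)},
    "plant": {"category": "decor", "size": (0.3, 0.3, 0.6)},
}

def generate_scene(prompt: str) -> list[dict]:
    """Return one placement per library asset mentioned in the prompt."""
    placements = []
    x = 0.0
    for word in prompt.lower().replace(",", " ").split():
        name = word.rstrip("s")  # crude singularisation for the sketch
        asset = ASSET_LIBRARY.get(name)
        if asset:
            placements.append({"asset": name, "position": (x, 0.0, 0.0)})
            x += asset["size"][0] + 0.5  # naive side-by-side layout
    return placements
```

For example, `generate_scene("an office with a desk, a chair and a plant")` yields three placements laid out side by side. The interesting part of the real system is everything this sketch elides: understanding free-form language, choosing plausible positions, and resolving ambiguous references.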

https://www.youtube.com/watch?v=scb8PTrogL8

Q: What milestones did SENSO3D reach during Sprint 3, and what impact do they have?

A: Some of our most exciting milestones included finalising the 3D Unity-ready model library, launching a working prototype of our image-based 3D search engine, and successfully testing our prompt-based scene creation tool with real users. We have completed the integration of our models into Unity and WebXR, making our tools ready for use in real-world virtual environments. These steps significantly lower the barriers for developers, designers, and educators to build rich XR experiences, opening the door to creative, interactive, and highly customisable virtual spaces.

Q: What are SENSO3D's next steps?

A: Next, we’re focusing on fine-tuning our tools based on user feedback, expanding our scene customisation features, and preparing our system for broader deployment within the CORTEX2 ecosystem. We’re also excited to collaborate more closely with other pilot teams, making sure our tools are easy to adopt and integrate.

We’d like to thank the CORTEX2 mentors and community for their valuable feedback and encouragement. Their support has helped us sharpen our ideas and keep pushing forward. With the finish line in sight, we’re more excited than ever to share what’s coming next.

https://www.youtube.com/watch?v=HXM7Pb6M8WA

 


Check out SENSO3D's previous interviews and stay updated on its progress!

Want to know more about other CORTEX2 innovators' updates? Browse all our supported teams on the CORTEX2 website:

Open Call 1 winners  -  Open Call 2 winners



CORTEX2 innovators: VISOR's 3rd progress update

VISOR wraps up Sprint 3 of the CORTEX2 Support Programme, reaching a major milestone — delivering a powerful 3D reconstruction service that transforms small objects into high-quality digital twins.


Read on to learn about VISOR's latest breakthroughs — and what’s next!

VISOR's progress during Sprint 3 of the CORTEX2 Programme

Q: How would you summarise VISOR's latest developments during Sprint 3 of the CORTEX2 programme?

A: The primary objective of the last Sprint was to validate the developed 3D reconstruction software service and assess its ability to efficiently reconstruct small objects in 3D. The service is accessible through a user-friendly web portal, where users can upload their 2D images or video sequences to generate accurate digital twins of small physical objects through geometric reconstruction and colour information extraction.

During this phase, we tweaked and finalised our 3D reconstruction module and created a comprehensive documentation of the VISOR service. We created guidelines that were published on the VISOR web portal to help users capture their small objects and maximise the quality of the 3D reconstructed output.

VISOR's 3D reconstruction test on a plant.

Q: What milestones did VISOR reach during Sprint 3, and what impact do they have?

A: In this final Sprint, our project reached the final key milestone “MS3 - Final Product”, where we have successfully delivered a 3D reconstruction service that efficiently reconstructs small physical objects into high-quality 3D models. We have validated the performance of our service and the quality of the generated 3D objects by testing it on a diverse range of objects, varying in both geometric complexity and texture detail. We have also achieved all the specified KPIs, offering a service that delivers fast reconstruction times, supports input from both images and video sequences, is validated on more than 30 small physical objects and generates a high-quality 3D textured model, ready to be shared and used by various 3D engines on Desktop, Web or VR/AR/XR environments.

Q: What are VISOR's next steps?

A: Our next step is to collaborate with other CORTEX2 Open Call winners that we connected with during an internal matchmaking event, and to apply the VISOR service to their use cases. In parallel, we plan to implement the business plan we have prepared to exploit VISOR commercially, as well as in future research projects.

We would like to highlight the productive and supportive collaboration with the CORTEX2 team throughout the project. Their continuous support and guidance helped us develop an efficient 3D reconstruction service that reconstructs small physical objects into shareable high-quality 3D textured models.


Check out VISOR's previous interviews and stay updated on its progress!

Want to know more about other CORTEX2 innovators' updates? Browse all our supported teams on the CORTEX2 website:

Open Call 1 winners  -  Open Call 2 winners



CORTEX2 innovators: XR-CARE's 1st progress update

Q: What is XR-CARE in one sentence?

A: XR-CARE is a modular, multimodal anonymisation framework designed to ensure privacy in XR teleconferencing by detecting and obfuscating sensitive information in video, audio, and text data.

Q: What problem are you solving? What makes your solution unique?

A: With XR-CARE, Logimade is addressing the growing privacy risks associated with recording and sharing XR teleconferencing sessions, where sensitive personal information—such as faces, voices, and on-screen text—can be unintentionally exposed. Current anonymisation tools are either limited to single data modalities or are too slow and complex for practical use in large-scale deployments.

What makes our solution unique is its modular, multimodal, and multi-stage architecture, which allows for configurable, high-recall anonymisation of video, audio, and text streams. It operates efficiently on consumer-grade hardware, supports real-time anonymisation, and adapts to different teleconferencing contexts (e.g., XR, desktop, mobile). Additionally, our system emphasises usability and transparency, allowing users to customise parameters, track processing history, and optimise anonymisation based on their needs—something no off-the-shelf solution currently offers.
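The modular, multi-stage idea can be sketched as follows: each modality gets an ordered, swappable list of anonymisation stages. This is a hedged illustration of the architecture described above, not XR-CARE's code; the stages operate on plain strings as stand-ins for video frames and audio chunks, and the stage names are our own.

```python
from typing import Callable

Stage = Callable[[str], str]

# Toy stages: strings stand in for frames/chunks of real media.
def blur_faces(frame: str) -> str:
    return frame.replace("FACE", "[blurred]")

def redact_text(frame: str) -> str:
    return frame.replace("NAME", "[redacted]")

def mask_voice(chunk: str) -> str:
    return chunk.replace("VOICE", "[pitch-shifted]")

class AnonymisationPipeline:
    """Configurable pipeline: modality -> ordered list of stages."""
    def __init__(self, stages: dict[str, list[Stage]]):
        self.stages = stages

    def process(self, modality: str, data: str) -> str:
        for stage in self.stages.get(modality, []):
            data = stage(data)
        return data

pipeline = AnonymisationPipeline({
    "video": [blur_faces, redact_text],  # multi-stage: faces, then on-screen text
    "audio": [mask_voice],
})
```

The design point is that stages are data, not hard-wired code: users can reorder them, drop them, or add new ones per deployment context (XR, desktop, mobile), which is what "configurable, multi-stage" buys over a monolithic anonymiser.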

Q: What are XR-CARE’s main objectives?

A: 

  1. Develop a multimodal anonymisation framework capable of processing video, audio, and text from XR teleconferencing sessions while preserving contextual usability.
  2. Ensure high privacy protection through a multi-stage detection strategy that minimises false negatives and adapts to varied teleconference scenarios.
  3. Enable near real-time anonymisation using consumer-grade hardware, making the solution practical and scalable for widespread adoption.
  4. Provide a user-friendly web platform where users can manage projects, customise anonymisation parameters, and track processing history.
  5. Integrate the XR-CARE anonymisation platform with the CORTEX2 framework.

CORTEX2 support programme progress

Q: What were the main activities implemented and milestones achieved during Sprint 1 of the CORTEX2 Support Programme?

A:

  1. Development infrastructure was successfully established, including GPU-enabled environments, version control, and benchmarking tools to support reproducible research and AI-based processing.
  2. Multi-scenario datasets were collected and annotated, capturing diverse XR teleconference conditions across video, audio, and text modalities, including challenging cases such as occlusions and varied lighting.
  3. Comprehensive benchmarking of face and body detection algorithms was performed, evaluating models like YOLOv10 and MediaPipe for accuracy, speed, and robustness in realistic teleconferencing scenarios.
  4. Initial evaluations of text and voice detection and obfuscation techniques were completed, with comparative tests of models such as FAST for text and Silero VAD for speech, along with analysis of obfuscation methods including Gaussian blur, pitch shifting, and spectral modification.
  5. A modular software architecture was defined, enabling configurable, multi-stage anonymisation pipelines with support for adaptive processing and multimodal data integration.
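Of the obfuscation methods mentioned above, region-based Gaussian blurring is the most familiar for visual data. As a toy illustration of the principle (blurring only a detected region, not the whole frame), here is a one-dimensional version in pure Python; a real system would apply a 2-D kernel to video frames, typically via a library such as OpenCV.

```python
import math

def gaussian_kernel(radius: int, sigma: float) -> list[float]:
    """Normalised 1-D Gaussian weights over [-radius, radius]."""
    raw = [math.exp(-(i * i) / (2 * sigma * sigma))
           for i in range(-radius, radius + 1)]
    total = sum(raw)
    return [v / total for v in raw]

def blur_region(row: list[float], start: int, end: int,
                radius: int = 2, sigma: float = 1.0) -> list[float]:
    """Blur only row[start:end], e.g. a detected face's bounding span."""
    kernel = gaussian_kernel(radius, sigma)
    out = list(row)
    for i in range(start, end):
        acc = 0.0
        for k, w in enumerate(kernel, start=-radius):
            j = min(max(i + k, 0), len(row) - 1)  # clamp at the edges
            acc += w * row[j]
        out[i] = acc
    return out
```

Restricting the blur to detected regions is what keeps the rest of the frame usable for its original purpose, which is the "preserving contextual usability" objective listed earlier.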

Q: What have you achieved so far?

A: So far, XR-CARE has successfully progressed from the solution design and component evaluation (Sprint 1) to delivering an integrated, tested, and multi-modal anonymisation solution ready for deployment (Sprint 2).
During Sprint 2, three major milestones were achieved:

  1. Integration with the CORTEX2 Platform: The team developed a production-ready RESTful API and a user-friendly web interface, enabling seamless access to and control of the anonymisation pipeline. This integration lays the groundwork for scalable, real-world application of the XR-CARE system within the CORTEX2 ecosystem.
  2. Extension of Multi-Modal Anonymisation: The system was expanded to support visual anonymisation of health-related IoT sensor data embedded in video recordings, reinforcing the framework’s robustness in healthcare contexts and enhancing its capacity to process diverse XR data streams.
  3. Comprehensive Testing and Debugging: Extensive testing was carried out using real-world XR teleconference datasets to validate performance, optimise speed and accuracy, and ensure compliance with GDPR. The resulting framework demonstrated high recall, low false positive rates, and efficient processing on consumer-grade hardware.

These developments have transformed XR-CARE into a practical, multimodal anonymisation solution capable of supporting privacy-preserving XR teleconferencing. The project has increased the team’s technological readiness and positioned XR-CARE for final validation and deployment in real-world CORTEX2 use cases.

Q: How is participating in CORTEX2 supporting XR-CARE?

A: The most valuable aspect of CORTEX2 support has been the technical mentorship provided by Alireza Javanmardi, whose guidance has been critical in refining our multi-stage anonymisation strategy. His input helped us make key architectural decisions, while the teleconference recordings from Open Rainbow he shared enabled more realistic and rigorous validation of our system.

Additionally, the encouragement to submit a full article and poster to EuroXR 2025, along with financial support for conference participation, has provided an excellent opportunity for dissemination, visibility, and networking. This not only helps to promote our work but also opens doors for potential collaborations and economic exploitation of the XR-CARE platform.

Q: What are your next steps within the CORTEX2 Programme?

A: The next steps of the XR-CARE project focus on making the framework ready for public release through real-world testing, refinement, and dissemination:

  1. Real-World Validation: We will conduct validation trials in real healthcare teleconferencing scenarios to assess the framework’s effectiveness. This phase will include the collection of performance metrics and user feedback to evaluate the system’s robustness, usability, and compliance with privacy standards.
  2. Final Optimisation: Based on insights gathered during validation, we will implement targeted improvements to the anonymisation pipeline. Particular attention will be given to optimising detection performance, reducing processing time, and incorporating feedback from healthcare professionals and other end users.
  3. Promotion and Dissemination: We will conduct a final review to ensure all project objectives have been fulfilled. In parallel, we will prepare promotional materials, including presentations, documentation, and online content, to support the visibility and potential adoption of XR-CARE beyond the CORTEX2 Programme.

Learn more about XR-CARE and stay updated on its progress!

Want to explore more XR innovation? Browse all our supported projects on the CORTEX2 website: 

Open Call 1 winners  -  Open Call 2 winners



CORTEX2 innovators: VISIXR's 1st progress update

Q: What is VISIXR in one sentence?

A: VISIXR aims to develop and deploy an innovative Smart Generator tool for real-time, AI-driven 3D asset modification within XR environments, aligning with the broader CORTEX2 vision of accessible and advanced immersive platforms.

Q: What problem are you solving? What makes your solution unique?

A: VISIXR provides a platform that breaks a 3D model down into distinct segments, analyses them, and can modify them using voice or text input. Users can also ask questions about individual segments, which are answered by an AI assistant.
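The idea of routing a voice or text command to a named segment of a model can be sketched in a few lines. This is a minimal, hypothetical illustration: the segment names, attribute model, and `set <segment> <attribute> <value>` command format are assumptions for the example, not VISIXR's actual interface.

```python
# Hypothetical sketch: routing a text command to a named segment of a 3D model.
# Segment names, attributes, and the command grammar are illustrative only.

from dataclasses import dataclass, field


@dataclass
class Segment:
    """One segment of a decomposed 3D model, with free-form attributes."""
    name: str
    attributes: dict = field(default_factory=dict)


def apply_command(segments: dict, command: str) -> str:
    """Parse a simple 'set <segment> <attribute> <value>' command."""
    parts = command.split()
    if len(parts) != 4 or parts[0] != "set":
        return "unrecognised command"
    _, name, attr, value = parts
    seg = segments.get(name)
    if seg is None:
        return f"no segment named '{name}'"
    seg.attributes[attr] = value
    return f"{name}.{attr} -> {value}"


model = {"wheel": Segment("wheel"), "chassis": Segment("chassis")}
print(apply_command(model, "set wheel colour red"))  # wheel.colour -> red
```

In a real system the parsing step would be handled by a speech-to-text front end and a language model rather than a fixed grammar, but the dispatch pattern is the same.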

Q: What are VISIXR’s main objectives?

A: 

  1. Automatic Image Segmentation (2D and 3D)
  2. Unity 3D Integration
  3. Enhanced 3D asset modification in real-time
  4. User-friendly UI

CORTEX2 support programme progress

Q: What were the main activities implemented and milestones achieved during Sprint 1 of the CORTEX2 Support Programme?

A: During Sprint 1 of the Support Programme, the main activities involved three core tasks. First, the team created foundational components for image segmentation and real-time modification, resulting in a prototype that could independently identify and act on image regions. Second, they integrated these tools with Unity3D, enabling real-time 3D rendering for use in extended reality (XR) environments. Finally, they conducted preliminary testing and refinement to optimise the prototype's performance, stability, and resource allocation.

The key milestones achieved were the successful development of a functional prototype capable of real-time segmentation, the successful integration with Unity3D for 3D manipulation, and the establishment of a robust system foundation for future sprints. Despite challenges with real-time processing and AI integration, all tasks were completed within the revised timeframe.

Q: What have you achieved so far?

A: In Sprint 2, the project made significant progress by extending the Smart Generator to interactive 3D applications and preparing its integration into the CORTEX2 framework. A key milestone was the extension to 3D assets, which means that users can now select and modify components of 3D models using voice and text input. New camera controls and enhanced visualisation methods, such as highlighting and animated exploded views, were developed for this purpose.

The user interface was also fundamentally revised based on user tests, making interaction more intuitive and significantly enhancing the user experience. In parallel, the technical integration into the CORTEX2 environment was planned in detail, creating a clear roadmap for the future connection. In particular, the concept of "function calls" lays the foundation for controlling the generator dynamically and contextually from external services.

To summarise, Sprint 2 produced a much more interactive and immersive application while setting the decisive technical course for future integration and extended functionality.

Q: How is participating in CORTEX2 supporting VISIXR?

A: CORTEX2 has given us the benefit of working within a much larger project environment. We have come into contact with other project teams and seen what they are working on, what ideas they have and which problems they are trying to solve. The dialogue within the project has also been a great support, whether in meetings with our mentor or in keynote sessions where approaches to topics proposed by the project groups were presented and discussed.

Q: What are your next steps within the CORTEX2 Programme?

A: Our next step in the programme is to complete all outstanding tasks by the end of the third sprint so that the generator is ready for use. These include, for example, improving real-time modification and user-friendliness.


Learn more about VISIXR and stay updated on its progress!

Want to explore more XR innovation? Browse all our supported projects on the CORTEX2 website: 

Open Call 1 winners  -  Open Call 2 winners


Subscribe to our newsletter

This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement N° 101070192. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Union’s Horizon Europe research and innovation programme. Neither the European Union nor the granting authority can be held responsible for them.