CORTEX2 innovators: PETER's 2nd progress update
Q: What has PETER achieved now that the CORTEX2 Support Programme is complete?
A: We achieved or exceeded all KPIs. At the end of the programme, we have a complete voice-to-voice pipeline that translates between English and French or German, in both directions, in three seconds while preserving the urgency in the speaker's tone. PETER works as a stand-alone solution but is ready for integration into the Rainbow platform. Beyond the contracted work, we added Italian as a fourth language and prepared the pipeline to detect emotions, not just urgency. For the CORTEX2 community, but also for the wider B2B market, PETER can unlock new scenarios focused on personal safety and security, emergency handling, and work in potentially hazardous environments.
Q: What would you highlight about the CORTEX2 Support Programme? From the CORTEX2 experience, what has helped advance your solution the most?
A: Apart from the undeniable effect of the financial support, we believe that contact with CORTEX2 people (our mentor, the technical team at Alcatel-Lucent, the project coordinator) was a continuous stimulus to keep going, overcome difficulties, and exceed what we promised. Contact with the other open call winners has also been valuable, as we discovered real interest in PETER and its application potential.
Q: What is the status of PETER after completing the Programme? What are your next steps?
A: We can declare a final TRL of 5. The pipeline is fully working from a technical point of view and has been validated with a satisfactory psychometric assessment. We would like to continue work on PETER, adding more languages and the ability to transfer emotion while balancing privacy concerns with usefulness in real situations. We may also explore extending PETER to less represented languages, such as those spoken in African countries or in certain areas of Eastern Europe.
Discover more about PETER and stay informed about its progress!
Want to explore more XR innovation? Browse all our supported projects on the CORTEX2 website:
Open Call 1 winners - Open Call 2 winners
CORTEX2 innovators: VIRTEX's 2nd progress update
Q: What has VIRTEX achieved now that the CORTEX2 Support Programme is complete?
A: Throughout the CORTEX2 Support Programme, VIRTEX has made significant progress in developing its no-code XR authoring platform. Key milestones include the implementation of real-time collaboration features (chat, communication bubbles, scene synchronisation), integration of a Cortex Virtual Assistant for scenario creation, and an IoT interface for context-aware simulations. Pilot testing at Ludwig Maximilians University validated the platform’s usability, while feedback has informed iterative improvements. We’ve also advanced dissemination through EuroXR 2025 and deepened collaboration with other CORTEX2 projects. An initial exploitation plan is in place, with defined IP terms and licensing options, paving the way for market entry.
Q: What would you highlight about the CORTEX2 Support Programme? From the CORTEX2 experience, what has helped advance your solution the most?
A: The mentorship and technical guidance have helped us refine key components like real-time collaboration, scenario logic, and user management. The access to a multidisciplinary network, ranging from fellow innovators to XR experts, fostered meaningful collaborations, particularly in gesture recognition and avatar interaction. Finally, the structured feedback from pilot testing and the business training sessions supported our exploitation planning, helping us shape a realistic path to market.
Q: What is the status of VIRTEX after completing the Programme? What are your next steps?
A: VIRTEX is now a functional prototype validated through academic pilot testing. Based on the positive pilot feedback, we are now focusing on market demands: refining the user experience, expanding content libraries, and integrating AI-driven avatars. Our next steps include finalising our go-to-market strategy and securing strategic partnerships to support scale-up and commercialisation.
Discover more about VIRTEX and stay informed about its progress!
Want to explore more XR innovation? Browse all our supported projects on the CORTEX2 website:
Open Call 1 winners - Open Call 2 winners
CORTEX2 innovators: FLYTEX's 3rd progress update
In Sprint 3 of the CORTEX2 Support Programme, FLYTEX delivered a fully functional prototype that streams real-time IoT data into a web XR environment, proving both technical robustness and readiness for real-world deployment.
Read on to learn about FLYTEX's latest breakthroughs — and what’s next!
FLYTEX's progress during Sprint 3 of the CORTEX2 Programme
Q: How would you summarise FLYTEX's latest developments during Sprint 3 of the CORTEX2 programme?
A: During Sprint 3, we were able to deploy and optimise a working prototype capable of sending real-time data from multiple devices and seamlessly integrating this data into the CORTEX2 platform, showing the data in a web XR environment. This prototype not only ensured the accurate and timely transmission of sensor readings but also validated the compatibility of our system with CORTEX2’s data ingestion pipelines. By utilising robust communication protocols and ensuring adherence to platform requirements, we laid the groundwork for scalable and efficient device integration.
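The interview does not name the transport protocol, but the device-to-platform streaming described above can be pictured with a minimal Python sketch. It assumes an MQTT broker; the broker address, topic layout, and payload schema below are illustrative, not FLYTEX's actual interface.

```python
"""Minimal sketch: streaming sensor readings from multiple devices.

Assumes an MQTT broker for transport; broker, topics, and payload
schema are hypothetical stand-ins for FLYTEX's real setup.
"""
import json
import random
import time

import paho.mqtt.client as mqtt  # paho-mqtt 1.x constructor below;
                                 # v2 needs mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)

BROKER = "broker.example.org"    # hypothetical broker address
TOPIC = "farm/plot-3/{device}"   # hypothetical topic layout

client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()

devices = ["soil-moisture", "air-temp", "humidity"]
while True:
    for device in devices:
        payload = json.dumps({
            "device": device,
            "value": round(random.uniform(0.0, 100.0), 2),  # stand-in reading
            "ts": time.time(),
        })
        # QoS 1 gives at-least-once delivery, a common choice when
        # sensor readings must not be silently dropped.
        client.publish(TOPIC.format(device=device), payload, qos=1)
    time.sleep(5)  # a batch every 5 s stays well under the 45-second upload KPI
```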

Q: What milestones did FLYTEX reach during Sprint 3, and what impact do they have?
A: The key milestones were the finalisation of the prototype and its deployment in an agricultural environment, ensuring data integrity and real-time responsiveness across the scenario.
We met all three defined KPIs: integration of at least 6 IoT device types, at least 2 IoT devices in a meeting room, and IoT data upload in under 45 seconds.
Q: What are FLYTEX's next steps?
A: We are working on a business plan to offer the developed feature as part of Flythings' services.
To close, we would like to thank our mentors and the CORTEX2 team for their support.
Check out FLYTEX's previous interviews and stay updated on its progress!
Want to know more about other CORTEX2 innovators' updates? Browse all our supported teams on the CORTEX2 website:
Open Call 1 winners - Open Call 2 winners
CORTEX2 innovators: SENSO3D's 3rd progress update
SENSO3D has wrapped up Sprint 3 of the CORTEX2 Support Programme with major advances in AI-driven 3D content creation, creating tools that make building immersive AR/VR spaces easier, smarter, and more accessible than ever.
Read on to learn about SENSO3D's latest breakthroughs — and what’s next!
SENSO3D's progress during Sprint 3 of the CORTEX2 Programme
Q: How would you summarise SENSO3D's latest developments during Sprint 3 of the CORTEX2 programme?
A: In Sprint 3, SENSO3D took a confident leap toward making 3D content creation accessible and smart. We finalised our 3D model library with over 1,000 structured assets, refined our AI tools to detect and reconstruct objects from 2D images with impressive accuracy, and developed a new prompt-based scene generation tool that can bring environments to life with just a line of text. These achievements bring us closer to our vision: intuitive and powerful tools for creating immersive AR/VR spaces.
https://www.youtube.com/watch?v=scb8PTrogL8
Q: What milestones did SENSO3D reach during Sprint 3, and what impact do they have?
A: Some of our most exciting milestones included finalising the 3D Unity-ready model library, launching a working prototype of our image-based 3D search engine, and successfully testing our prompt-based scene creation tool with real users. We have completed the integration of our models into Unity and WebXR, making our tools ready for use in real-world virtual environments. These steps significantly lower the barriers for developers, designers, and educators to build rich XR experiences, opening the door to creative, interactive, and highly customizable virtual spaces.
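The interview does not detail how the image-based 3D search engine works internally. As a hedged illustration of the general technique, the sketch below embeds a query image and pre-embedded library assets in a shared vector space and ranks by cosine similarity; the embedding function is a placeholder, not SENSO3D's actual model.

```python
"""Sketch: image-based search over a 3D model library via embeddings.

The encoder is a stand-in; SENSO3D's real pipeline is not described
in this interview beyond detecting objects from 2D images.
"""
import numpy as np

def embed(image_pixels: np.ndarray) -> np.ndarray:
    # Placeholder for a learned image encoder (e.g. a CNN or ViT)
    # mapping an image to a fixed-size feature vector.
    rng = np.random.default_rng(abs(hash(image_pixels.tobytes())) % 2**32)
    return rng.standard_normal(512)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Library of pre-embedded assets: asset id -> feature vector
# (mirrors the 1,000+ asset library mentioned above).
library = {f"asset_{i:04d}": np.random.default_rng(i).standard_normal(512)
           for i in range(1000)}

def search(query_image: np.ndarray, top_k: int = 5):
    q = embed(query_image)
    ranked = sorted(library.items(), key=lambda kv: cosine(q, kv[1]),
                    reverse=True)
    return ranked[:top_k]
```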
Q: What are SENSO3D's next steps?
A: Next, we’re focusing on fine-tuning our tools based on user feedback, expanding our scene customisation features, and preparing our system for broader deployment within the CORTEX2 ecosystem. We’re also excited to collaborate more closely with other pilot teams, making sure our tools are easy to adopt and integrate.
We’d like to thank the CORTEX2 mentors and community for their valuable feedback and encouragement. Their support has helped us sharpen our ideas and keep pushing forward. With the finish line in sight, we’re more excited than ever to share what’s coming next.
https://www.youtube.com/watch?v=HXM7Pb6M8WA
Check out SENSO3D's previous interviews and stay updated on its progress!
Want to know more about other CORTEX2 innovators' updates? Browse all our supported teams on the CORTEX2 website:
Open Call 1 winners - Open Call 2 winners
CORTEX2 innovators: VISOR's 3rd progress update
VISOR wraps up Sprint 3 of the CORTEX2 Support Programme, reaching a major milestone — delivering a powerful 3D reconstruction service that transforms small objects into high-quality digital twins.
Read on to learn about VISOR's latest breakthroughs — and what’s next!
VISOR's progress during Sprint 3 of the CORTEX2 Programme
Q: How would you summarise VISOR's latest developments during Sprint 3 of the CORTEX2 programme?
A: The primary objective of the last Sprint was to validate the developed 3D reconstruction software service and assess its ability to efficiently reconstruct small objects in 3D. The service is accessible through a user-friendly web portal, where users can upload their 2D images or video sequences to generate accurate digital twins of small physical objects through geometric reconstruction and colour information extraction.
During this phase, we fine-tuned and finalised our 3D reconstruction module and created comprehensive documentation for the VISOR service. We also published guidelines on the VISOR web portal to help users capture their small objects and maximise the quality of the reconstructed 3D output.
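VISOR is accessed through its web portal, but the upload-and-reconstruct flow can be pictured programmatically. The sketch below posts a folder of capture images to a hypothetical REST endpoint; the URL, field names, and response shape are invented for illustration.

```python
"""Sketch: submitting object captures to a reconstruction service.

Endpoint, field names, and response shape are hypothetical; VISOR's
documented interface is its web portal.
"""
from pathlib import Path

import requests

ENDPOINT = "https://visor.example.org/api/reconstruct"  # hypothetical

def submit_capture(image_dir: str) -> str:
    files = [("images", (p.name, p.open("rb"), "image/jpeg"))
             for p in sorted(Path(image_dir).glob("*.jpg"))]
    # Following the capture guidelines (even coverage, good lighting)
    # is what actually drives the quality of the reconstructed model.
    resp = requests.post(ENDPOINT, files=files, timeout=600)
    resp.raise_for_status()
    return resp.json()["model_url"]  # e.g. a textured model ready for XR engines
```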

Q: What milestones did VISOR reach during Sprint 3, and what impact do they have?
A: In this final Sprint, our project reached its final key milestone, “MS3 - Final Product”: we successfully delivered a 3D reconstruction service that efficiently reconstructs small physical objects into high-quality 3D models. We validated the performance of the service and the quality of the generated 3D objects by testing it on a diverse range of objects varying in both geometric complexity and texture detail. We also achieved all the specified KPIs: the service delivers fast reconstruction times, supports input from both images and video sequences, has been validated on more than 30 small physical objects, and generates a high-quality textured 3D model ready to be shared and used by various 3D engines in desktop, web, or VR/AR/XR environments.
Q: What are VISOR's next steps?
A: Our next step is to collaborate with the other CORTEX2 Open Call winners we connected with during an internal matchmaking event and apply the VISOR service to their use cases. In parallel, we plan to implement the business plan we have prepared to exploit VISOR further, both commercially and in future research projects.
We would like to highlight the productive and supportive collaboration with the CORTEX2 team throughout the project. Their continuous support and guidance helped us develop an efficient 3D reconstruction service that reconstructs small physical objects into shareable high-quality 3D textured models.
Check out VISOR's previous interviews and stay updated on its progress!
Want to know more about other CORTEX2 innovators' updates? Browse all our supported teams on the CORTEX2 website:
Open Call 1 winners - Open Call 2 winners
CORTEX2 innovators: XR-CARE's 1st progress update
Q: What is XR-CARE in one sentence?
A: XR-CARE is a modular, multimodal anonymisation framework designed to ensure privacy in XR teleconferencing by detecting and obfuscating sensitive information in video, audio, and text data.
Q: What problem are you solving? What makes your solution unique?
A: With XR-CARE, Logimade is addressing the growing privacy risks associated with recording and sharing XR teleconferencing sessions, where sensitive personal information—such as faces, voices, and on-screen text—can be unintentionally exposed. Current anonymisation tools are either limited to single data modalities or are too slow and complex for practical use in large-scale deployments.
What makes our solution unique is its modular, multimodal, and multi-stage architecture, which allows for configurable, high-recall anonymisation of video, audio, and text streams. It operates efficiently on consumer-grade hardware, supports real-time anonymisation, and adapts to different teleconferencing contexts (e.g., XR, desktop, mobile). Additionally, our system emphasises usability and transparency, allowing users to customise parameters, track processing history, and optimise anonymisation based on their needs—something no off-the-shelf solution currently offers.
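To make the modular, multi-stage idea concrete, here is a minimal sketch of what such a pipeline can look like as code: each stage pairs a detector with an obfuscator, and stages are composed and toggled per deployment context. The stage interface and names are illustrative, not XR-CARE's published API.

```python
"""Sketch: a modular, multi-stage anonymisation pipeline.

Interfaces and names are illustrative; XR-CARE's real API is not
published in this interview.
"""
from dataclasses import dataclass, field
from typing import Any, Callable

Frame = Any  # a video frame, audio chunk, or text span

@dataclass
class Stage:
    name: str
    detect: Callable[[Frame], list]         # returns regions/spans to hide
    obfuscate: Callable[[Frame, list], Frame]
    enabled: bool = True                    # user-configurable per context

@dataclass
class Pipeline:
    stages: list[Stage] = field(default_factory=list)

    def run(self, frame: Frame) -> Frame:
        # Multi-stage strategy: later stages re-check what earlier ones
        # missed, trading some speed for fewer false negatives.
        for stage in self.stages:
            if stage.enabled:
                frame = stage.obfuscate(frame, stage.detect(frame))
        return frame
```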
Q: What are XR-CARE’s main objectives?
A:
- Develop a multimodal anonymisation framework capable of processing video, audio, and text from XR teleconferencing sessions while preserving contextual usability.
- Ensure high privacy protection through a multi-stage detection strategy that minimises false negatives and adapts to varied teleconference scenarios.
- Enable near real-time anonymisation using consumer-grade hardware, making the solution practical and scalable for widespread adoption.
- Provide a user-friendly web platform where users can manage projects, customize anonymization parameters, and track processing history.
- Integrate the XR-CARE anonymisation platform with the CORTEX2 framework.
CORTEX2 support programme progress
Q: What were the main activities implemented and milestones achieved during Sprint 1 of the CORTEX2 Support Programme?
A:
- Development infrastructure was successfully established, including GPU-enabled environments, version control, and benchmarking tools to support reproducible research and AI-based processing.
- Multi-scenario datasets were collected and annotated, capturing diverse XR teleconference conditions across video, audio, and text modalities, including challenging cases such as occlusions and varied lighting.
- Comprehensive benchmarking of face and body detection algorithms was performed, evaluating models like YOLOv10 and MediaPipe for accuracy, speed, and robustness in realistic teleconferencing scenarios.
- Initial evaluations of text and voice detection and obfuscation techniques were completed, with comparative tests of models such as FAST for text and Silero VAD for speech, along with analysis of obfuscation methods including Gaussian blur, pitch shifting, and spectral modification (a minimal blur sketch follows this list).
- A modular software architecture was defined, enabling configurable, multi-stage anonymisation pipelines with support for adaptive processing and multimodal data integration.
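To picture the Gaussian-blur obfuscation step named above, here is a minimal single-frame sketch. It uses OpenCV's bundled Haar cascade as a stand-in detector; the team actually benchmarked YOLOv10 and MediaPipe, which are more robust choices for teleconference footage.

```python
"""Sketch: Gaussian-blur face obfuscation on a single frame.

Haar cascade is a stand-in detector; XR-CARE benchmarked YOLOv10 and
MediaPipe, which handle occlusions and varied lighting better.
"""
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Looser detection parameters raise recall (fewer missed faces) at
    # the cost of false positives -- acceptable when blurring is the action.
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 4):
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0)
    return frame

if __name__ == "__main__":
    img = cv2.imread("frame.png")
    cv2.imwrite("frame_anonymised.png", blur_faces(img))
```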
Q: What have you achieved so far?
A: So far, XR-CARE has successfully progressed from the solution design and component evaluation (Sprint 1) to delivering an integrated, tested, and multi-modal anonymisation solution ready for deployment (Sprint 2).
During Sprint 2, three major milestones were achieved:
- Integration with the CORTEX2 Platform: The team developed a production-ready RESTful API and a user-friendly web interface, enabling seamless access to and control of the anonymisation pipeline. This integration lays the groundwork for scalable, real-world application of the XR-CARE system within the CORTEX2 ecosystem.
- Extension of Multi-Modal Anonymisation: The system was expanded to support visual anonymisation of health-related IoT sensor data embedded in video recordings, reinforcing the framework’s robustness in healthcare contexts and enhancing its capacity to process diverse XR data streams.
- Comprehensive Testing and Debugging: Extensive testing was carried out using real-world XR teleconference datasets to validate performance, optimise speed and accuracy, and ensure compliance with GDPR. The resulting framework demonstrated high recall, low false positive rates, and efficient processing on consumer-grade hardware.
These developments have transformed XR-CARE into a practical, multimodal anonymisation solution capable of supporting privacy-preserving XR teleconferencing. The project has increased the team’s technological readiness and positioned XR-CARE for final validation and deployment in real-world CORTEX2 use cases.
Q: How is participating in CORTEX2 supporting XR-CARE?
A: The most valuable aspect of CORTEX2 support has been the technical mentorship provided by Alireza Javanmardi, whose guidance has been critical in refining our multi-stage anonymisation strategy. His input helped us make key architectural decisions, while the teleconference recordings from Open Rainbow he shared enabled more realistic and rigorous validation of our system.
Additionally, the encouragement to submit a full article and poster to EuroXR 2025, along with financial support for conference participation, has provided an excellent opportunity for dissemination, visibility, and networking. This not only helps to promote our work but also opens doors for potential collaborations and economic exploitation of the XR-CARE platform.
Q: What are your next steps within the CORTEX2 Programme?
A: The next steps of the XR-CARE project focus on making the framework ready for public release through real-world testing, refinement, and dissemination:
- Real-World Validation: We will conduct validation trials in real healthcare teleconferencing scenarios to assess the framework’s effectiveness. This phase will include the collection of performance metrics and user feedback to evaluate the system’s robustness, usability, and compliance with privacy standards.
- Final Optimisation: Based on insights gathered during validation, we will implement targeted improvements to the anonymisation pipeline. Particular attention will be given to optimising detection performance, reducing processing time, and incorporating feedback from healthcare professionals and other end users.
- Promotion and Dissemination: We will conduct a final review to ensure all project objectives have been fulfilled. In parallel, we will prepare promotional materials, including presentations, documentation, and online content, to support the visibility and potential adoption of XR-CARE beyond the CORTEX2 programme.
Learn more about XR-CARE and stay updated on its progress!
Want to explore more XR innovation? Browse all our supported projects on the CORTEX2 website:
Open Call 1 winners - Open Call 2 winners
CORTEX2 innovators: VISIXR's 1st progress update
Q: What is VISIXR in one sentence?
A: VISIXR aims to develop and deploy an innovative Smart Generator tool for real-time, AI-driven 3D asset modification within XR environments, aligning with the broader CORTEX2 vision of accessible and advanced immersive platforms.
Q: What problem are you solving? What makes your solution unique?
A: VISIXR provides a platform that breaks down 3D models into individual segments, analyses them, and can modify them using voice or text input. At the same time, users can ask questions about the segments, which are then answered by the AI bot.
Q: What are VISIXR’s main objectives?
A:
- Automatic Image Segmentation (2D and 3D)
- Unity 3D Integration
- Enhanced 3D asset modification in real-time
- User-friendly UI
CORTEX2 support programme progress
Q: What were the main activities implemented and milestones achieved during Sprint 1 of the CORTEX2 Support Programme?
A: During Sprint 1 of the Support Programme, the main activities involved three core tasks. First, the team created foundational components for image segmentation and real-time modification, resulting in a prototype that could independently identify and act on image regions. Second, they integrated these tools with Unity3D, enabling real-time 3D rendering for use in extended reality (XR) environments. Finally, they conducted preliminary testing and refinement to optimise the prototype's performance, stability, and resource allocation.
The key milestones achieved were the successful development of a functional prototype capable of real-time segmentation, the successful integration with Unity3D for 3D manipulation, and the establishment of a robust system foundation for future sprints. Despite challenges with real-time processing and AI integration, all tasks were completed within the revised timeframe.
Q: What have you achieved so far?
A: In Sprint 2, the project made significant progress by extending the Smart Generator to interactive 3D applications and preparing its integration into the CORTEX2 framework. A key milestone was the extension to 3D assets, which means that users can now select and modify components of 3D models using voice and text input. New camera controls and enhanced visualisation methods, such as highlighting and animated exploded views, were developed for this purpose.
At the same time, the user interface was fundamentally revised based on user tests to make operation more intuitive and significantly enhance the user experience. In parallel, the technical integration into the CORTEX2 environment was planned in detail, creating a clear roadmap for the future connection. In particular, the concept of "function calls" lays the foundation for controlling the generator dynamically and contextually from external services.
To summarise, Sprint 2 has produced a much more interactive and immersive application, and at the same time set the decisive technical course for future integration and extended functionality.
Q: How is participating in CORTEX2 supporting VISIXR?
A: CORTEX2 has helped us by providing a great environment within a much larger project. You come into contact with other project groups and see what others are currently working on, what ideas they have, and what problems they are trying to solve. The dialogue within the project was also a great support, whether in meetings with our mentor or in keynote sessions where approaches to various topics proposed by project groups were presented and discussed.
Q: What are your next steps within the CORTEX2 Programme?
A: Our next steps in the programme are to complete all outstanding tasks by the end of Sprint 3 so that the generator is ready for use. These include, for example, improving real-time modification and user-friendliness.
Learn more about VISIXR and stay updated on its progress!
Want to explore more XR innovation? Browse all our supported projects on the CORTEX2 website:
Open Call 1 winners - Open Call 2 winners
CORTEX2 innovators: VIRTEX's 1st progress update
Q: What is VIRTEX in one sentence?
A: VIRTEX is a no-code XR training platform that empowers non-technical users to create and adapt immersive, multi-scenario simulations focused on decision-making in various industries.
Q: What problem are you solving? What makes your solution unique?
A: Traditional XR training solutions are costly, technically complex, and hard to adapt without specialised developers — making them inaccessible to many organisations and industries. VIRTEX addresses this by removing the technical barriers to XR content creation, enabling trainers, educators, and domain experts to build and adapt immersive simulations themselves through a no-code platform. This empowers sectors like healthcare, industry, education, and public services to scale decision-based training faster, cheaper, and more effectively.
What makes our solution unique is:
- Truly no-code & user-friendly: VIRTEX empowers trainers, educators, and subject matter experts to build and customise immersive simulations without writing a single line of code.
- Cross-industry flexibility: The platform supports multi-scenario simulations across diverse sectors — from healthcare and manufacturing to customer service and education.
- Focus on decision-making: Scenarios are centred on realistic, branching decision points that reflect complex, real-world situations — not just technical skills.
- Adaptable & modular: Content can be reused, localised, and adapted to different roles, contexts, or training levels — reducing both cost and time to deploy.
- Rapid prototyping & deployment: Organisations can create or update training simulations in hours or days, supporting agility in workforce development.
https://www.youtube.com/watch?v=dLDDik3VMX0
Q: What are VIRTEX’s main objectives?
A:
- Develop a No-Code XR Editor: Create an intuitive, drag-and-drop platform that enables non-technical users to design and adapt immersive, interactive training scenarios without coding skills.
- Promote Cross-Industry Usability: Design the platform to be flexible and applicable across various sectors such as healthcare, manufacturing, education, customer service, and public services.
- Facilitate Rapid Adaptation: Allow users to quickly modify or build new scenarios to keep pace with evolving training needs, regulations, or organisational changes.
- Ensure Accessibility and Inclusivity: Make XR training accessible to users with varying technical backgrounds, languages, and learning styles to maximise impact.
CORTEX2 support programme progress
Q: What were the main activities implemented and milestones achieved during Sprint 1 of the CORTEX2 Support Programme?
A:
Main Activities Implemented
- MetaMedicsVR and LMU collaboratively developed a detailed project specification and test plan, outlining technical requirements, integration points with CORTEX2 SDKs, and testing methodologies.
- The MetaMedicsVR team underwent training sessions to gain proficiency with the CORTEX2 SDKs and APIs, exploring their capabilities for virtual environment editing and integration.
- Development of the core functionalities of the virtual environment editor, including drag-and-drop UI design, backend architecture, and initial integration with CORTEX2 services.
Milestones Achieved
- Completion of detailed Specification & Test Plan.
- Proficiency gained with CORTEX2 SDKs and APIs.
- Delivery of a prototype of the Virtual Environment Editor with basic service integration capabilities.
Q: What have you achieved so far?
A:
Main Activities Implemented
- Integration of at least five CORTEX2 services, such as scene synchronisation, Rainbow core, Cortex DB, Rainbow mediation gateway, gesture interaction, and security protocols, into the editor.
- Conducted rigorous compatibility testing to ensure the editor works seamlessly across multiple browsers and VR headsets.
- Performance tuning and backend infrastructure optimisation to support scalability and robust operation under load.
- Usability tests led by LMU gathered user feedback to refine and improve the UI for better intuitiveness and accessibility.
- Creation of comprehensive user guides in multiple languages and a demo video showcasing VIRTEX capabilities and usage.
Milestones Achieved
- Successful integration of at least 5 CORTEX2 services.
- Completion of compatibility testing across at least three platforms.
- Backend optimisation for scalability finalised.
- User interface refined based on usability testing feedback.
- User documentation and demo video created.
Q: How is participating in CORTEX2 supporting VIRTEX?
A: Participating in the CORTEX2 Support Programme has significantly accelerated the development and deployment of VIRTEX. Through access to advanced XR technologies and the CORTEX2 SDKs, we have integrated cutting-edge features such as real-time scene synchronisation and gesture interaction, which would have been challenging and costly to develop independently. The expert mentoring and technical support provided by the consortium have helped us address complex challenges efficiently, enhancing the platform’s quality and robustness. Additionally, the programme has facilitated valuable networking opportunities, enabling collaboration with other innovators and expanding our project’s impact. Being part of this prestigious EU initiative has also increased our visibility and credibility, which is crucial for engaging stakeholders and potential customers. Furthermore, access to diverse user groups through CORTEX2 has allowed us to gather essential feedback, ensuring VIRTEX meets real-world needs across various industries. Overall, CORTEX2 has been a catalyst that boosts our technical progress, market readiness, and ecosystem integration.
Q: What are your next steps within the CORTEX2 Programme?
A: Our next steps within the CORTEX2 program focus on finishing comprehensive testing of the VIRTEX platform to ensure stability and usability across target environments. We will document all findings and learnings gathered during development and user trials to guide future improvements. Concurrently, we will develop strategies to scale VIRTEX effectively, including expanding industry applications, enhancing platform robustness, and exploring partnerships to broaden adoption. These actions aim to maximise VIRTEX’s impact and readiness for real-world deployment.
Learn more about VIRTEX and stay updated on its progress!
Want to explore more XR innovation? Browse all our supported projects on the CORTEX2 website:
Open Call 1 winners - Open Call 2 winners
CORTEX2 innovators: VEM's 1st progress update
Q: What is VEM in one sentence?
A: VEM is an easily implementable, accessible tool that helps developers integrate end-to-end encryption (E2E) into VR projects.
Q: What problem are you solving? What makes your solution unique?
A: We address the lack of VR-specific, privacy-by-design solutions for secure communication in professional and research environments. Our SDK uniquely integrates end-to-end encryption directly into VR platforms (Unity3D), ensuring maximum data privacy and ease of implementation.
Q: What are VEM’s main objectives?
A: The main goal is to enable secure, end-to-end encrypted communication within XR applications. We focus on delivering a developer-friendly SDK and API that substantially increase privacy in VR.
CORTEX2 support programme progress
Q: What were the main activities implemented and milestones achieved during Sprint 1 of the CORTEX2 Support Programme?
A: During Sprint 1, we built the core architecture and ensured system stability. This foundational work minimised future risks and set the stage for advanced features in later phases. We also developed smart key distribution.
Q: What have you achieved so far?
A: In Sprint 2, we implemented PrivMX technology (E2E encryption libraries) in Unity3D and then integrated it with the Rainbow platform, significantly enhancing the privacy of VR communication.
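The interview does not show PrivMX's API, but the end-to-end pattern itself can be illustrated in a few lines. The sketch below uses PyNaCl's public-key Box (libsodium) so that only the two endpoints can decrypt, and any relay in the middle sees only ciphertext; it is a sketch of the pattern, not VEM's actual Unity3D SDK.

```python
"""Sketch: end-to-end encryption between two peers with PyNaCl.

Illustrates the E2E pattern only; PrivMX's actual Unity3D API differs
and is not shown in the interview.
"""
from nacl.public import Box, PrivateKey

# Each peer generates a keypair; only public keys ever leave a device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Sender encrypts with their private key and the receiver's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"voice packet or chat line")

# A server or relay between the peers sees only this ciphertext.
receiving_box = Box(bob_key, alice_key.public_key)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"voice packet or chat line"
```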
Q: How is participating in CORTEX2 supporting VEM?
A: CORTEX2 gave us access to experts genuinely interested in applying our technology, which led to the development of a realistic and relevant use case. We also received valuable support in integrating our solution into a broader system, allowing us to validate its interoperability. Additionally, the opportunity to explore other breakthrough technologies within the program has inspired new ideas and accelerated the professionalisation of our approach to VR.
Q: What are your next steps within the CORTEX2 Programme?
A: We plan to continue testing and validation while preparing for broader integration and deployment. Our goal is to collaborate with developers and stakeholders to refine the SDK and expand its adoption in real-world VR applications.
Learn more about VEM and stay updated on its progress!
Want to explore more XR innovation? Browse all our supported projects on the CORTEX2 website:
Open Call 1 winners - Open Call 2 winners
CORTEX2 innovators: SAME-XR's 1st progress update
Q: What is SAME-XR in one sentence?
A: SAME-XR is a VCS-integrated asset management pipeline that bridges the gap between 3D artists and XR developers.
Q: What problem are you solving? What makes your solution unique?
A: Recent industry surveys reveal that most studios spend more than an hour per week on basic asset management tasks like searching for the latest files (53%) and transforming formats (62%). The same data reveals a clear demand for specific solutions, with 77% of studios rating version control as a very or extremely important feature in an asset management system. This insight became the cornerstone of our approach in SAME-XR (Scalable Asset Management and Conversion Engine for XR), a platform designed to solve these inefficiencies.
Q: What are SAME-XR’s main objectives?
A:
- Processing of new models in under 5-10 minutes, depending on model complexity (including conversions, preview generation, metadata, etc.).
- Support for a minimum of 3-4 formats, including .obj and .gltf/.glb.
- Handling of at least 15 simultaneous requests.
CORTEX2 support programme progress
Q: What were the main activities implemented and milestones achieved during Sprint 1 of the CORTEX2 Support Programme?
A: The sprint focused on implementing the core features of the SAME-XR platform, covering four key tasks: (1) backend API development, (2) file operations services, (3) web interface creation, and (4) additional interface development. The system is designed as a microservices-based, event-driven, cloud-native SaaS application, leveraging technologies such as PostgreSQL, AWS S3, AWS SNS/SQS, and Keycloak for authentication and storage.
A proof-of-concept (PoC) version of the system was successfully deployed on an AWS EC2 instance for testing and validation. Key performance indicators (KPIs) were met, demonstrating high efficiency in model processing, broad format compatibility, and scalable request handling. No deviations were encountered from the planned development roadmap.
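The event-driven shape described above can be sketched as a stateless worker that long-polls a queue for upload events and processes each asset. The queue URL and message schema below are invented for illustration; only the AWS services named in the interview (S3, SNS/SQS) are taken from the source.

```python
"""Sketch: an event-driven asset-conversion worker on AWS SQS.

Queue URL and message schema are illustrative, not SAME-XR's actual
contract; the S3/SQS pairing follows the stack named in the interview.
"""
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/asset-events"  # hypothetical

def convert(bucket: str, key: str) -> None:
    # Stand-in for the real pipeline: format conversion, preview
    # generation, metadata extraction.
    print(f"processing s3://{bucket}/{key}")

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)  # long polling
    for msg in resp.get("Messages", []):
        event = json.loads(msg["Body"])
        convert(event["bucket"], event["key"])
        # Delete only after successful processing: at-least-once semantics.
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])
```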
Q: What have you achieved so far?
A: Sprint 2 involved a strategic pivot from the initial goal of direct CORTEX2 integration towards developing standalone tools, API clients, and integration with version control systems (VCS), specifically GitLab, to better align with established developer workflows.
Development focused on creating the foundational “Projects microservice” to manage assets, issues, and VCS integration; functional prototypes of a Blender add-on and a Unity package for direct user interaction; a suite of supporting API clients (Python, C#, JavaScript, Unity); and comprehensive Helm charts for Kubernetes deployment. Key achievements include an operational API for the Projects microservice, a successful proof-of-concept demonstrating the automated GitLab issue synchronisation workflow, and functional Blender/Unity prototypes enabling core asset upload and download.
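The GitLab issue synchronisation mentioned above can be pictured with the python-gitlab client: when a new asset version lands, the service opens a review issue in the linked project. The server URL, project ID, and labels are assumptions for illustration, not SAME-XR's actual sync workflow.

```python
"""Sketch: opening a GitLab issue when an asset needs review.

Server URL, project ID, token handling, and labels are illustrative;
SAME-XR's real synchronisation workflow is not documented here.
"""
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.org", private_token="<token>")
project = gl.projects.get(42)  # hypothetical project ID

def open_review_issue(asset_name: str, version: str) -> None:
    project.issues.create({
        "title": f"Review asset '{asset_name}' ({version})",
        "description": "New version uploaded via SAME-XR; please review.",
        "labels": ["same-xr", "asset-review"],
    })
```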
Q: How is participating in CORTEX2 supporting SAME-XR?
A: The SAME-XR project has benefited greatly from involvement in the CORTEX2 programme. The most beneficial element has been our incorporation into the thriving CORTEX2 ecosystem, which goes beyond the necessary funding. Our strategic shift to a more reliable, developer-centric solution has been directly influenced by the valuable insights we have gained from working with top partners, interacting with a variety of use cases, and receiving focused mentoring. Being part of this network has greatly expanded our perspective on the prospects and real-world issues within the European XR ecosystem, in addition to speeding up our technical development.
Q: What are your next steps within the CORTEX2 Programme?
A: Our immediate next steps within the CORTEX2 programme are centred on rigorous testing and user-centred refinement. Now that the core functionalities of the SAME-XR pipeline and its integrated add-ons are in place, our priority is to deploy the solution for hands-on testing with the CORTEX2 partners. We will focus on gathering detailed qualitative and quantitative feedback on the workflow's usability, performance, and real-world impact. This crucial feedback loop will drive our final phase of development, allowing us to polish the user interfaces, resolve any remaining bottlenecks, and ensure SAME-XR delivers maximum value to the ecosystem.
Learn more about SAME-XR and stay updated on its progress!
Want to explore more XR innovation? Browse all our supported projects on the CORTEX2 website:
Open Call 1 winners - Open Call 2 winners