We spoke with the team behind SENSO3D, one of the innovators selected in our 1st Open Call. Their project is focused on making the creation of 3D content more accessible, scalable, and efficient, particularly for extended reality (XR) applications. By combining artificial intelligence with a structured 3D object library, SENSO3D is paving the way for faster and more user-friendly virtual scene development, especially in contexts like virtual conferencing and interior design.
Let’s dive into their journey and discover the innovative work they’re developing within the CORTEX2 programme.
Q: What is SENSO3D, in one sentence?
A: SENSO3D is developing an AI-powered 3D object library that transforms 2D images into high-quality, categorised 3D models to revolutionise extended reality (XR) environments and virtual scene creation, particularly for conference room settings.
Q: What problem are you solving, and what makes your solution unique?
A: SENSO3D addresses the challenge of efficiently creating high-quality, categorised 3D models for virtual environments, such as conference rooms, without requiring extensive manual 3D design work. Traditional methods for building XR-ready 3D content are time-consuming, costly, and often demand advanced technical expertise.
Businesses and developers also face challenges such as:
- Limited access to high-quality 3D content that is ready for XR platforms
- Difficulty transforming 2D images into accurate, usable 3D models
- A lack of standardised, categorised libraries for easy model retrieval and integration
SENSO3D offers a cost-effective, scalable solution that automates the 2D-to-3D transformation process and builds an indexed 3D object library. This simplifies workflows for designers, developers, and businesses, helping democratise the creation of 3D content and transforming how virtual environments are designed and experienced.
Q: What are SENSO3D’s main objectives?
A: The key objectives of the SENSO3D project are:
Develop a 3D object library
- Create a library of 3D models representing key elements for conference rooms, such as chairs, tables, lighting, and technology.
- Aim to include around 1,000 models, categorised into specific groups for easier use.
AI tool for 2D-to-3D conversion
- Develop an AI-based tool to convert 2D images into basic 3D models.
- Use available AI techniques to improve accuracy and reduce manual work.
Categorise and organise models: Group the 3D models into around 50 categories so that users can find and reuse them easily.
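A categorised library like this is, at its core, an index from category names to model records. The sketch below illustrates the idea in plain Python; the category names, record fields, and file paths are illustrative placeholders, not SENSO3D's actual schema.

```python
from collections import defaultdict

# Minimal in-memory index for a categorised 3D object library.
# Categories, IDs, and paths below are hypothetical examples.
class ObjectLibrary:
    def __init__(self):
        self._by_category = defaultdict(list)

    def add(self, category, model_id, path):
        """Register a model file under a category (e.g. 'chair')."""
        self._by_category[category].append({"id": model_id, "path": path})

    def find(self, category):
        """Return every model registered under the given category."""
        return list(self._by_category.get(category, []))

    def categories(self):
        """List all categories currently in the index, sorted."""
        return sorted(self._by_category)

library = ObjectLibrary()
library.add("chair", "chair_001", "models/chair_001.glb")
library.add("chair", "chair_002", "models/chair_002.glb")
library.add("table", "table_001", "models/table_001.glb")

print(library.categories())        # ['chair', 'table']
print(len(library.find("chair")))  # 2
```

In practice such an index would sit on top of a database or asset store, but the retrieval pattern — look up by category, get back ready-to-load model references — stays the same.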
Integrate with AR/VR platforms
- Make the models compatible with tools like Unity and the CORTEX2 platform.
- Optimise the models for better performance, such as reducing size and improving loading times.
Provide simple access and usability: Make the library and tools easy to access through a simple user interface and basic documentation.
Demonstrate use cases: Show how the models and tools can be used in practical applications, such as virtual conference rooms, interior design or planning, and basic AR/VR environments.
Contribute to XR content development
- Support easier and more efficient creation of 3D content for AR/VR applications.
- Focus on providing resources that can be useful for developers and businesses.
These objectives focus on building a simple and functional 3D model library and tools while supporting basic AR/VR applications without overpromising on outcomes.
CORTEX2 support programme progress
Q: What have you achieved so far?
A:
3D object library development
- Collected and categorised 3D models for conference room items, including chairs, tables, and technology equipment.
- Scanned and processed over 100 real-world objects into usable 3D models.
AI tool development
- Developed and tested the initial version of the 2D-to-3D conversion tool, which can generate 3D models from images.
- Demonstrated the tool with basic functionality for detecting and creating simple 3D objects like chairs.
Dataset integration: Integrated relevant public datasets, such as ShapeNet and Matterport3D, to support AI training and improve model accuracy.
Categorisation of 3D models: Organised models into 50 categories for easier management and retrieval.
Initial demonstrator: Created a basic demo platform to showcase the 2D-to-3D conversion tool and its output for selected objects.
Performance optimisation: Began implementing methods like polygon count reduction and texture optimisation to prepare the models for AR/VR environments.
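Polygon count reduction can be done in several ways; one of the simplest is vertex clustering, where nearby vertices are snapped to a coarse grid and merged, and faces that collapse are dropped. The toy sketch below shows the idea in plain Python; it is a generic illustration of the technique, not SENSO3D's actual optimisation pipeline, which would typically use engine or DCC tooling (e.g. quadric decimation).

```python
# Toy vertex-clustering simplification: snap vertices to a coarse grid,
# merge vertices that share a cell, and drop faces that become
# degenerate or duplicated. Illustrative only.
def simplify(vertices, triangles, cell=1.0):
    cell_of = {}        # grid cell -> new vertex index
    remap = []          # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cell_of:
            cell_of[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cell_of[key])
    new_triangles, seen = [], set()
    for a, b, c in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        face = tuple(sorted((a, b, c)))
        if a != b and b != c and a != c and face not in seen:
            seen.add(face)                 # keep one copy of each face
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles

# Two thin triangles whose nearby base vertices merge into shared cells.
verts = [(0, 0, 0), (0.1, 0, 0), (2, 0, 0), (2.1, 0, 0), (1, 2, 0)]
tris = [(0, 2, 4), (1, 3, 4)]
v2, t2 = simplify(verts, tris, cell=1.0)
print(len(verts), "->", len(v2), "vertices")   # 5 -> 3 vertices
print(len(tris), "->", len(t2), "triangles")   # 2 -> 1 triangles
```

The trade-off is the usual one for AR/VR assets: coarser cells mean fewer polygons and faster loading, at the cost of geometric detail.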
The impact we have achieved so far
- Provided a foundation for 3D model standardisation and integration into AR/VR platforms.
- Created a starting point for automating the 2D-image-to-3D-model workflow.
- Demonstrated initial use cases, such as simple virtual conference room elements.
These milestones represent steady progress toward building a functional 3D object library and AI tools, aligning with the project goals.
Q: How is participating in CORTEX2 supporting SENSO3D?
A:
Participating in CORTEX2 has provided significant value to our project in the following ways:
Access to mentorship and guidance: The mentorship from CORTEX2 has helped us define the essential “must-have” elements for SENSO3D, such as key items like chairs, tables, and technology equipment for the 3D object library. Regular feedback and support from mentors have allowed us to refine our approach and align our outputs with project goals.
Resource identification and partnerships: With guidance from the programme, we identified suitable sources and partners to acquire modifiable 3D parts and integrate them into our library. This includes online resources and scanning partnerships.
Validation of the AI tool: The programme provided a framework to test and demonstrate our AI-powered tool. This has helped us showcase a working prototype and validate its functionality for selected categories.
Structured project monitoring: The CORTEX2 programme’s structured timeline and monitoring processes have ensured that we stay on track with deliverables and milestones. Regular surveys and reporting have provided clarity on progress and areas that need improvement.
Networking and collaboration: The opportunity to connect with other funded projects and partners has opened avenues for potential collaborations, such as access to additional datasets or 3D scanning resources.
Support in overcoming challenges: CORTEX2 has been instrumental in helping us address project obstacles, such as resource constraints for AI training and challenges with sourcing real-world items for 3D scanning. For instance, their support in connecting with academic institutions or retailers has been valuable.
Most valuable aspects for our team: The programme’s mentorship and technical guidance; support in accessing resources, datasets, and feedback loops for validation; and structured monitoring that helps us measure progress and address challenges early on.
Participating in CORTEX2 has helped us establish a strong foundation for developing our 3D object library and AI tools, while providing the necessary resources and guidance to improve our project outcomes.
Q: What are your next steps within the programme?
A:
Expand the 3D object library
- Continue scanning and processing real-world objects, focusing on conference room components like tables, chairs, lighting, and technology.
- Incorporate additional commercially available 3D models to meet the target of 1,000 categorised models.
Enhance the AI tool
- Improve the 2D-to-3D conversion tool by expanding its detection capabilities from 4-5 categories to at least 10 key categories through additional datasets and AI model training.
- Apply optimisation techniques such as transfer learning to improve accuracy and efficiency while reducing resource requirements.
Develop use cases and demonstrators
- Finalise the demo platform showcasing AI-generated 3D models from 2D images.
- Create full virtual conference room scenes with categorised and optimised 3D objects for AR/VR use cases.
Optimise and integrate models
- Optimise 3D models for performance using techniques like polygon reduction, texture compression, and occlusion culling.
- Integrate the library and AI tools with Unity and the CORTEX2 platform for testing and validation.
Prepare for final validation
- Ensure deliverables and KPIs are met for Sprint 2 and Sprint 3, including model integration, AI tool deployment, and performance testing.
- Collect user feedback to identify areas for final improvements.
Learn more about SENSO3D and stay updated on its progress!
Want to explore more XR innovation? Browse all our supported projects on the CORTEX2 website:

