Open Access
Article
Article ID: 3727
by Pranshu Saxena, Aatif Jamshed, Sanjay Kumar Singh, Sandeep Saxena, Sahil Kumar Aggarwal
Metaverse 2025, 6(3);   
Abstract

Accurate and efficient brain tumor segmentation is critical for diagnosis, treatment planning, and outcome monitoring in neuro-oncology. This study presents an integrated framework that combines deep learning-based tumor segmentation with 3D spatial reconstruction and metaverse-aligned visualization. The Cellpose segmentation model, known for its shape-aware adaptability, was applied to grayscale T1-weighted MRI slices to generate binary tumor masks. These 2D masks were reconstructed into 3D surface meshes using the marching cubes algorithm, enabling the computation of clinically relevant spatial parameters including centroid, surface area, bounding box dimensions, and mesh extents. The resulting tumor models were embedded into a global coordinate system and visualized across orthogonal planes, simulating extended reality (XR) environments for immersive anatomical exploration. Quantitative evaluation using DICE, Intersection over Union (IoU), and Positive Predictive Value (PPV) validated the segmentation accuracy, with DICE scores exceeding 0.85 in selected cases. The reconstructed tumors exhibited surface areas ranging from ~45,000 to ~74,000 voxel² units and extended across more than 200 units along the Y and Z axes. Although volumetric values were not computed due to open mesh geometry, the spatial profiles provided a reliable foundation for integration into metaverse platforms. This pipeline offers a lightweight and scalable approach for bridging conventional 2D tumor imaging with immersive 3D applications, paving the way for advanced diagnostic, educational, and surgical planning tools.
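A minimal sketch of the reconstruction and evaluation stages summarized above, assuming the Cellpose-derived binary masks have already been stacked into a NumPy volume and using scikit-image's marching cubes; all function names, parameters, and the choice of library are illustrative and not the authors' implementation.

```python
# Sketch: binary mask volume -> marching cubes mesh -> spatial profile,
# plus DICE / IoU / PPV against a ground-truth mask (assumptions noted above).
import numpy as np
from skimage import measure

def evaluate_mask(pred, gt):
    """DICE, IoU and PPV for two binary masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    ppv = tp / (tp + fp)
    return dice, iou, ppv

def reconstruct_and_profile(mask_volume, spacing=(1.0, 1.0, 1.0)):
    """Marching cubes surface plus the spatial parameters reported in the
    abstract: centroid, surface area, bounding box, and mesh extents."""
    verts, faces, _, _ = measure.marching_cubes(mask_volume, level=0.5,
                                                spacing=spacing)
    centroid = verts.mean(axis=0)
    surface_area = measure.mesh_surface_area(verts, faces)
    bbox_min, bbox_max = verts.min(axis=0), verts.max(axis=0)
    return {"centroid": centroid, "surface_area": surface_area,
            "bbox_min": bbox_min, "bbox_max": bbox_max,
            "extents": bbox_max - bbox_min}
```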

Open Access
Article
Article ID: 3728
by Aatif Jamshed, Pranshu Saxena, Sandeep Saxena, Sahil Kumar Aggarwal
Metaverse 2025, 6(3);   
Abstract

The metaverse, as a shared collective virtual space, holds unparalleled promise for engaging 3D experiences through augmented reality (AR) and virtual reality (VR). Despite notable progress, a gap remains in the real-time visualization of intricate data and environments. This article proposes a novel approach that uses AR/VR technologies to enhance 3D visualization in the metaverse. By integrating real-time data processing, multi-layered virtual environments, and advanced rendering methods, the envisioned system increases interaction, immersion, and scalability. The computational model relies on a hybrid algorithm that combines machine learning (ML)-based object recognition with GPU-based rendering to improve real-time 3D visualization in the metaverse. The system uses the recognized importance of objects to dynamically adjust the level of detail (LOD) of individual objects in the scene, optimizing rendering quality and computational performance. The major system components are an object recognition module that classifies and ranks objects in real time and a GPU rendering pipeline that dynamically scales rendering detail according to object priority. The algorithm balances high visual quality against system performance by using deep learning for precise object detection and GPU parallelism for efficient rendering. Experimental results show that the proposed system achieves considerable improvements in rendering speed, interaction latency, and visual quality compared with common AR/VR rendering methods. The results confirm the promise of fusing AI and graphics to build more effective and visually sophisticated virtual environments.
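A small sketch of how priority-driven LOD selection could work, assuming importance scores are supplied by an upstream ML detector; the thresholds, distance falloff, and GPU-load heuristic are hypothetical and not taken from the paper.

```python
# Sketch: map per-object importance and viewer distance to an LOD index,
# backing off detail as GPU load approaches the frame budget.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    importance: float   # 0.0..1.0, assumed output of the recognition module
    distance: float     # distance to the viewer, in scene units

def select_lod(obj: SceneObject, gpu_load: float) -> int:
    """Return an LOD index (0 = full detail, 3 = coarsest)."""
    # Combine semantic importance with a distance-based falloff.
    priority = obj.importance / (1.0 + 0.05 * obj.distance)
    # Reduce detail when the GPU is near its frame budget.
    priority *= (1.0 - min(gpu_load, 0.9))
    if priority > 0.5:
        return 0
    if priority > 0.25:
        return 1
    if priority > 0.1:
        return 2
    return 3

objects = [SceneObject("avatar", 0.9, 2.0), SceneObject("tree", 0.2, 40.0)]
print({o.name: select_lod(o, gpu_load=0.6) for o in objects})
# e.g. {'avatar': 1, 'tree': 3}
```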

Open Access
Article
Article ID: 3735
by Yingxiao Zhang
Metaverse 2025, 6(3);   
Abstract

Humanoid robots, as core carriers of embodied intelligence, rely on their deep learning and behavior prediction capabilities to break through the bottleneck in general-task execution. Taking Unitree as a case study, this research conducts an in-depth analysis of the current technical status, challenges, and optimization paths of humanoid robots in this field. A dynamic environment perception-decision-execution closed-loop system is constructed, encompassing a multimodal perception layer, a hybrid decision-making layer, and a real-time execution layer. It is proposed that hardware iteration must be deeply coordinated with AI algorithms. In terms of model optimization, a multi-task lightweight model architecture is established, which innovatively combines dynamic environment adaptation algorithms with transfer learning mechanisms. Meanwhile, efforts are being made to develop a native multimodal, industry-specific large-scale model for robots, exploring an engineering implementation plan for humanoid robot behavior prediction. Experimental verification not only tests the performance of Unitree’s humanoid robots but also identifies technical bottlenecks such as insufficient chip computing power, the lack of industry-specific large-scale models, and dependence on remote control, and proposes targeted optimization suggestions. Finally, this study looks ahead to the development trends of humanoid robot technology, including breakthroughs in general AI models, the implementation of neuromorphic computing, and aspects of social impact and ethical reconstruction, aiming to promote the development of the humanoid robot industry and expand its applications in diverse scenarios such as industry and households.
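A schematic sketch of a perception-decision-execution closed loop of the kind described in the abstract; every class, method, and parameter name here is a hypothetical placeholder rather than a Unitree API or the paper's implementation.

```python
# Sketch: multimodal perception -> hybrid decision-making -> real-time execution,
# iterated as a fixed-period control loop.
import time

class PerceptionLayer:
    def sense(self):
        # Fuse multimodal inputs (vision, IMU, force sensing) into a state estimate.
        return {"obstacles": [], "pose": (0.0, 0.0, 0.0)}

class DecisionLayer:
    def plan(self, state):
        # Hybrid decision-making: learned policy plus rule-based safety checks.
        return {"action": "step_forward", "speed": 0.5}

class ExecutionLayer:
    def act(self, command):
        # Real-time execution: forward joint targets to low-level controllers.
        print("executing", command["action"])

def control_loop(cycles=3, period=0.02):
    perception, decision, execution = PerceptionLayer(), DecisionLayer(), ExecutionLayer()
    for _ in range(cycles):
        state = perception.sense()
        command = decision.plan(state)
        execution.act(command)
        time.sleep(period)  # fixed control period; a real system uses a scheduler

control_loop()
```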