Open Access
Article
Article ID: 3105
by Samet Gürsoy
Metaverse 2025, 6(2)
Received: 27 November 2024; Accepted: 14 February 2025; Available online: 20 March 2025;
Issue release: 30 June 2025
Abstract

This study assesses how major cybersecurity incidents affect the prices and trading volumes of Metaverse coins, focusing on the five largest such assets from 2017 to 2024. Using event study and impulse response analysis, it examines how nine major security incidents, including the Coincheck and Ronin hacks, affected MANA (Decentraland), SAND (The Sandbox), AXS (Axie Infinity Shards), ENJ (Enjin Coin), and GALA (Gala Games). Metaverse coin prices and trading volumes serve as the dependent variables, while Bitcoin and Ethereum prices act as the main independent variables to control for market-wide price activity. The results show strong but short-lived effects whose magnitude depends on the severity of the event and the specifics of the affected platform. The Coincheck hack caused a 4.9% price drop and a 22% volume decline over 10 days, whereas the BtcTurk breach had a smaller impact. The paper extends the literature on market stability in the cryptocurrency sector, offering new insights into risk management and investment in this emerging asset class.
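
For readers unfamiliar with the methodology, the following is a minimal sketch of the event-study step described above: fitting a market model on a pre-event estimation window and cumulating abnormal returns over the event window. The column names, the single-factor market model with Bitcoin as the proxy, and the window lengths are illustrative assumptions, not the paper's exact specification.

```python
# Sketch of an event study on a Metaverse coin around a security incident.
# Assumes `prices` is a DataFrame with a DatetimeIndex and daily close
# columns 'coin' and 'btc' (Bitcoin as the market proxy).
import pandas as pd
import statsmodels.api as sm

def event_study_car(prices: pd.DataFrame, event_date: str,
                    est_window: int = 120, evt_window: int = 10) -> pd.Series:
    """Cumulative abnormal return (CAR) of the coin around `event_date`."""
    rets = prices[["coin", "btc"]].pct_change().dropna()
    t0 = rets.index.get_loc(pd.Timestamp(event_date))

    # Estimate a single-factor market model on the pre-event window.
    est = rets.iloc[t0 - est_window : t0]
    model = sm.OLS(est["coin"], sm.add_constant(est["btc"])).fit()

    # Abnormal return = actual return minus model-predicted return.
    evt = rets.iloc[t0 : t0 + evt_window]
    expected = model.params["const"] + model.params["btc"] * evt["btc"]
    abnormal = evt["coin"] - expected
    return abnormal.cumsum()  # CAR over the event window
```

The impulse response analysis mentioned in the abstract would complement this by fitting a vector autoregression and tracing how a shock to volume or price decays over subsequent days.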

Open Access
Article
Article ID: 3146
by Zhihao Yang, Weilong Peng, Meie Fang
Metaverse 2025, 6(2)
Received: 9 December 2024; Accepted: 14 March 2025; Available online: 27 March 2025;
Issue release: 30 June 2025
Abstract

Reconstructing the human body from monocular video input presents significant challenges, including a limited field of view and difficulty in capturing non-rigid deformations, such as those associated with clothing and pose variations. These challenges often compromise motion editability and rendering quality. To address these issues, we propose a cloth-aware 3D Gaussian splatting approach that leverages the strengths of 2D convolutional neural networks (CNNs) and 3D Gaussian splatting for high-quality human body reconstruction from monocular video. Our method parameterizes 3D Gaussians anchored to a human template to generate posed position maps that capture pose-dependent non-rigid deformations. Additionally, we introduce Learnable Cloth Features, which are pixel-aligned with the posed position maps to address cloth-related deformations. By jointly modeling cloth and pose-dependent deformations, along with compact, optimizable linear blend skinning (LBS) weights, our approach significantly enhances the quality of monocular 3D human reconstructions. We also incorporate carefully designed regularization techniques for the Gaussians, improving the generalization capability of our model. Experimental results demonstrate that our method outperforms state-of-the-art techniques for animatable avatar reconstruction from monocular inputs, delivering superior performance in both reconstruction fidelity and rendering quality.
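
As background for the deformation model the abstract refers to, the sketch below illustrates linear blend skinning (LBS) with learnable per-point weights: each canonical point is deformed by a weighted blend of per-joint rigid transforms. The class name, tensor shapes, and softmax parameterization of the weights are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of linear blend skinning (LBS) with optimizable weights,
# the standard deformation step that methods like the one above refine.
import torch
import torch.nn as nn

class LearnableLBS(nn.Module):
    def __init__(self, num_points: int, num_joints: int):
        super().__init__()
        # Unnormalized skinning weights, one row per Gaussian/point.
        self.weight_logits = nn.Parameter(torch.zeros(num_points, num_joints))

    def forward(self, points: torch.Tensor,
                joint_transforms: torch.Tensor) -> torch.Tensor:
        """Deform canonical points into posed space.

        points:           (N, 3) canonical positions
        joint_transforms: (J, 4, 4) per-joint rigid transforms
        returns:          (N, 3) posed positions
        """
        w = torch.softmax(self.weight_logits, dim=-1)          # (N, J)
        # Blend per-joint transforms into one transform per point: (N, 4, 4).
        T = torch.einsum("nj,jab->nab", w, joint_transforms)
        # Apply in homogeneous coordinates.
        homo = torch.cat([points, torch.ones_like(points[:, :1])], dim=-1)
        posed = torch.einsum("nab,nb->na", T, homo)            # (N, 4)
        return posed[:, :3]
```

In the paper's setting, the weights (and the pose-dependent residual deformations captured by the posed position maps and cloth features) are optimized jointly with the 3D Gaussian parameters rather than fixed from a template.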
