Recent advances of three-dimensional reconstruction technology and its application in modern agriculture

Ting Huang, Tao Wang, Ziang Niu, Chen Yang, Zixing Wu, Zhengjun Qiu

Article ID: 3068
Vol 5, Issue 4, 2024
DOI: https://doi.org/10.54517/ama3068
Received: 13 November 2024; Accepted: 24 December 2024; Available online: 31 December 2024;
Issue release: 31 December 2024



Abstract

The timely acquisition of agricultural information is fundamental to smart agriculture, providing a basis for decision-making in agricultural production and ensuring protection against risks. With advancements in computer vision and machine learning, 3D reconstruction, the process of generating detailed digital models, has demonstrated substantial potential for mining and recording crucial information about objects, including geometry, structural attributes, visual appearance and other properties. This paper summarizes the applications of 3D reconstruction and measurement in the field of agricultural information acquisition based on prior research. It first reviews 3D reconstruction and its related techniques and algorithms, then conducts a comprehensive analysis of the applications of 3D reconstruction and measurement in crop cultivation, animal husbandry, aquaculture and post-harvest products. It can be concluded that, compared to traditional two-dimensional imagery, 3D reconstruction and measurement offer richer and more comprehensive information for agricultural practices, showing better performance in tasks such as organ segmentation, geometry measurement, health monitoring and simulation analysis. Future work can focus on keeping up with the latest reconstruction technology, accelerating 3D reconstruction, fusing multi-sensor data and combining 3D reconstruction with other information acquisition technologies.


Keywords

crop cultivation; animal husbandry; 3D phenotyping; computer vision; trait measurement; optical sensors

Full Text

1. Introduction

Agriculture is indispensable to human life, serving as the primary source of nutrition and energy for the global population [1]. Agricultural products encompass a broad range of plant- and animal-derived foods, which play a critical role in providing nutrients, sustaining health, and supporting human development [2]. Agriculture also has economic and social significance, contributing to global livelihoods and economies by providing income for farmers and ensuring food security [3]. However, increasing population and consumption are placing unprecedented demands on agricultural products [4]. While efforts are made to improve agricultural production efficiency, it should be kept in mind that exhaustive and destructive agricultural expansion can harm the ecological environment and human health [5]. International consensus has emerged to transform agrifood systems to realize the 2030 Agenda for Sustainable Development by increasing efficiency, inclusiveness, resilience, and sustainability [6].

Acquiring agricultural information is of paramount importance in such a transformation, as it enables farmers, experts, and researchers to make informed decisions that can optimize productivity and resource management. In recent years, the rise of Smart Agriculture, which integrates advanced technologies such as the Internet of Things (IoT) [7], big data [8], and artificial intelligence (AI) [9], has further amplified the role of agricultural information [10]. These technologies allow for real-time monitoring of field environments, crop status and potential threats, offering precise and actionable insights that can significantly improve yields, reduce resource wastage, ensure food security and maintain the long-term sustainability of agricultural systems. Moreover, acquiring timely and accurate agricultural information will be a key measure in the development of smart agriculture, as it helps build resilience in the context of climate change by enabling adaptive farming practices [11].

With the rapid development of computer vision technologies over the past few decades, machine vision has played an increasingly important role in acquiring agricultural information [12–14]. Through advanced image processing techniques, machine vision can precisely and automatically monitor crops, livestock, and aquaculture products. Moreover, machine vision is the primary method by which robotic platforms, such as drones and autonomous vehicles, perceive agricultural scenes, making it an indispensable technology for smart agricultural machinery [12,15]. However, traditional computer vision techniques are predominantly constrained to the acquisition and processing of 2D images. It should be noted that agricultural objects are inherently complex, possessing intricate structures that present significant challenges for comprehensive analysis. Consequently, relying solely on 2D imagery risks omitting critical information necessary for the accurate assessment of complex structures. Three-dimensional (3D) reconstruction technology enables the collection of more comprehensive agricultural information by creating detailed 3D models of crops, livestock, and farming environments. Analysis and measurement on these 3D models help capture precise spatial and structural data that go beyond traditional 2D imagery, providing a deeper understanding of factors such as crop growth patterns and livestock behavior [16,17].
Therefore, 3D reconstruction is starting to play a pivotal role in building digital twin models, i.e., virtual replicas of physical assets, which allow for real-time monitoring, simulation, and analysis of farm operations and make it easier for robots and automated systems to perform complex tasks such as precision planting, fertilizing and harvesting [9,18,19]. It is noteworthy that the rapid evolution of deep learning in recent years has injected considerable innovation and dynamism into ongoing research and development in computer vision tasks including 3D reconstruction [20]. For instance, the progression has been evident in the shift from classical Convolutional Neural Networks (CNNs) to more advanced architectures such as Vision Transformers (ViTs), alongside other emerging models [21–23]. Furthermore, there has been notable progress in multi-source data fusion, moving from reliance on single-source data to the integration of multimodal learning [24,25]. These technological advancements have ushered in a new era for 3D reconstruction techniques, demonstrating impressive performance in various related tasks [26].

Unlike in other industries, the complexity of agricultural objects themselves and the variability of the environment pose great challenges to vision tasks including 3D reconstruction, requiring highly robust reconstruction methods and algorithms. Despite these hardships, researchers have succeeded in introducing 3D reconstruction to acquire agricultural information after persistent attempts and explorations. The aim of this review is to present such successful cases from the past ten years in crop cultivation, livestock husbandry, fisheries and quality inspection. Although recent reviews of 3D reconstruction in agricultural subdomains such as plant phenotyping [27], fruit production [28] and animal husbandry [29] have been published, a bird's-eye view of the application of 3D reconstruction across the entire agricultural domain is still lacking. Moreover, this work organizes research by agricultural subdomain rather than by specific methods [27] and algorithms [30]. This work seeks to provide relevant researchers with a comprehensive understanding of the current state of 3D reconstruction applications in agriculture, expecting to inspire researchers by drawing insights from advancements across different subfields, thereby promoting further interdisciplinary innovation. This review first introduces 3D reconstruction with its principles, processing algorithms and data types, followed by a categorization of its applications, and concludes with future prospects, revealing how 3D reconstruction and measurement empower better perception for smart systems and enhance the productivity, resilience and sustainability of future agriculture.

2. Three-Dimensional reconstruction techniques

2.1. Three-Dimensional reconstruction

3D reconstruction technology was born with the development of computer graphics. In the 1960s, computer graphics pioneers such as Ivan Sutherland laid foundational work for 3D modeling, enabling the first digital 3D representations [31]. While the first 3D scanners appeared in the 1960s, the 1980s and 1990s saw significant strides in 3D scanning and photogrammetry with advances in computational power and algorithms [32]. In recent years, machine learning has revolutionized 3D reconstruction, allowing more accurate and complex models to be built from limited data sources and transforming fields from medical imaging to virtual reality.
2.2. Reconstruction principles

Figure 1. Category of 3D reconstruction principles utilized in acquiring agricultural information.

An investigation of research papers focusing on 3D agricultural information was conducted, leading to a summary of the reconstruction principles utilized in experimental setups, as shown in Figure 1. These principles can be categorized into active reconstruction and passive reconstruction, depending on how the sensors interact with the object or scene to capture 3D data. Active 3D reconstruction involves projecting light onto the object or scene or emitting penetrating energy (X-ray, MRI, etc.) to actively measure distances, depths, or shapes. Meanwhile, passive 3D reconstruction relies only on ambient information, typically images from cameras.

2.2.1. Active reconstruction

Structured light scanning usually utilizes a system consisting of projectors and receivers. The projectors cast specific patterns, e.g., stripes, grids, or spots, onto the surface of the measured objects. The receivers, usually one or more cameras, then capture how the patterns distort across the surface, and algorithms are utilized to compute the object's 3D shape based on the distortions. Time-of-Flight (ToF) uses infrared light pulses or modulated light and measures the time delay between when the light is emitted and when it reflects back from the object. This time delay is then converted into a depth value for each pixel. ToF sensors typically focus on capturing depth information for each pixel of an image, offering real-time 3D data over shorter distances, which makes them widely used to obtain depth information on consumer-grade products such as smartphones and UAVs. Similar to ToF sensors, Light Detection and Ranging (LiDAR) also measures the time it takes for the light to return after hitting an object to calculate depth. However, LiDAR sensors use laser pulses, making them capable of measuring distance at higher frequency and over longer range. LiDAR systems are usually more expensive and complex due to the need for laser emitters and precise measurement equipment. Tomography-based 3D reconstruction, such as X-ray computed tomography (CT), magnetic resonance imaging (MRI), and electrical impedance tomography (EIT), offers unique advantages for agricultural applications. These techniques allow for the non-invasive imaging of internal structures of objects, enabling the visualization of hollow areas or the distribution of different materials within crops. For instance, CT scans can reveal the internal quality of fruits, such as detecting cavities or internal defects, while MRI and EIT can be used to study the moisture content and composition of plant tissues. The potential to acquire such detailed internal information makes these methods valuable for agricultural research, particularly in crop breeding, quality control, and post-harvest monitoring [33,34].

2.2.2. Passive reconstruction

Learning from the way human eyes perceive depth, Binocular Vision (BV) systems use two cameras placed a certain distance apart and obtain depth information based on the disparity between two images captured from slightly different viewpoints. When both cameras capture images of the same scene, the corresponding points in each image appear slightly shifted relative to each other due to their different perspectives. The difference in the position of these corresponding points is referred to as disparity (or parallax). By calculating the disparity, it is possible to determine the depth of each point in the scene using triangulation. Thus, binocular vision offers a cost-effective and straightforward approach to 3D reconstruction, providing real-time depth perception without the need for complex equipment. However, binocular vision depends on stereo matching, leading to poor accuracy in low-texture scenes.
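For a rectified stereo pair, the triangulation reduces to a simple relation, Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras and d the disparity. The following minimal sketch converts a disparity map to a depth map; the focal length and baseline values are purely illustrative and not taken from any cited system:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map (pixels) to a depth map (metres)
    for a rectified stereo pair: Z = f * B / d."""
    depth = np.full_like(disparity, np.inf, dtype=np.float64)
    valid = disparity > 0                      # zero disparity = no match found
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Illustrative example: 1000 px focal length, 12 cm baseline
disparity = np.array([[40.0, 20.0], [10.0, 0.0]])
print(disparity_to_depth(disparity, focal_px=1000.0, baseline_m=0.12))
# Points with larger disparity are closer to the cameras.
```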
As its name implies, multiview 3D reconstruction recovers 3D structure from multiview imagery, such as the image collection in Figure 2. SfM-MVS is a notable solution in photogrammetry, which consists of two parts: Structure from Motion (SfM) and Multiview Stereo (MVS). SfM identifies common feature points among images and simultaneously solves for the spatial positions of these points and the corresponding camera intrinsics and extrinsics of each photo through optimization. The MVS algorithm then calculates dense point clouds of 3D objects based on photo-consistency. Recent advances in novel view synthesis, for instance NeRF, Instant-NGP and 3DGS, have outperformed traditional MVS algorithms and boosted the quality of multiview 3D reconstruction [35–37]. In addition to the SfM-MVS framework, algorithms like Shape from Silhouette and end-to-end neural networks are also capable of outputting 3D models [38,39].

Figure 2. A plant canopy reconstructed from a multiview dataset consisting of multiple photos taken from different viewpoints [40].

With multiple image pairs, multiview-based algorithms can better resolve ambiguities in depth estimation, making them versatile for 3D reconstruction in environments with varying surface textures and complexities. Compared to binocular vision, the drawback is that multiview-based algorithms require significantly more computational resources and data processing. Multiview-based algorithms also require precise camera intrinsics and extrinsics to avoid errors in reconstruction.

2.2.3. Composite reconstruction solutions

Modern commercial 3D sensors often combine multiple 3D reconstruction techniques to leverage their strengths and achieve better performance. For example, commercial structured light scanners typically integrate structured light technology with stereo vision to enhance depth accuracy and surface details. Similarly, several integrated depth cameras, such as the OAK-D-Pro, utilize a combination of structured light and stereo vision to provide more precise and robust 3D data, making them usable in dynamic or complex environments [41]. In addition, smart systems in the automotive industry also make use of composite solutions by integrating various 3D sensing technologies, such as LiDAR, ToF, and camera-based vision, to enhance sensing ability for autonomous driving. These multi-sensor approaches ensure a more comprehensive and reliable understanding of the surroundings, contributing to improved safety and performance in diverse conditions [42].

2.3. Processing algorithms

In order to analyze 3D data and measure valuable information, 3D data processing is a necessary procedure. Commonly used 3D data processing methods include, but are not limited to, filtering, meshing, resampling, shape fitting, registration, segmentation and skeletonization.

Due to various limitations of the sensors, the raw point cloud data retrieved from sensors inevitably contains noise points or artifacts. Filtering helps eliminate unwanted outliers or artifacts that would distort further analysis. Statistical Outlier Removal (SOR) is a commonly used algorithm for filtering outliers. Filtering of point clouds can also be achieved through clustering or segmentation methods to remove regions of no interest.
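As an illustration of the filtering step, the following minimal sketch removes statistical outliers from a point cloud with the Open3D library; the file path and parameter values are placeholders rather than settings from the cited studies:

```python
import open3d as o3d

# Load a raw point cloud exported from a 3D sensor (placeholder path)
pcd = o3d.io.read_point_cloud("raw_scan.ply")

# Statistical Outlier Removal: points whose mean distance to their
# 20 nearest neighbours deviates by more than 2 standard deviations
# from the global average are discarded.
filtered, inlier_idx = pcd.remove_statistical_outlier(nb_neighbors=20,
                                                      std_ratio=2.0)
print(f"kept {len(inlier_idx)} of {len(pcd.points)} points")
o3d.io.write_point_cloud("filtered_scan.ply", filtered)
```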
Meshing recovers a surface representation from point clouds, voxels or implicit neural fields. Poisson Surface Reconstruction and Delaunay Triangulation are typical algorithms for reconstructing meshes from point clouds, while the Marching Cubes algorithm is used to extract meshes from volumetric representations. Resampling of 3D data helps in task adaptation by making a trade-off between model resolution and processing speed. Subsampling of 3D data mitigates the computational cost and storage occupancy, while upsampling can enhance details in 3D models.

Registration is a basic step for obtaining a complete 3D model by integrating multiple frames of data collected from sensors. Registration also allows for the fusion of data from different sources and provides the basis for geometric alignment in measurement. Typically, registration is a two-stage process that can be divided into coarse registration and fine registration. Coarse registration usually approximates the relative position by key points or global features, e.g., SIFT [43], SURF [44], RANSAC [45] and PCA [46]. Fine registration further refines the alignment, minimizing the difference between corresponding models. Fine registration algorithms can hardly produce a correct solution without coarse registration. Iterative Closest Point (ICP) and its variants [47] are the most widely used fine registration algorithms.

Segmentation is the basis for many important tasks such as automatic fruit and branch counting. Typical segmentation algorithms include clustering, Region Growing, Supervoxel Segmentation and deep learning. Among these, semantic segmentation algorithms can simultaneously provide semantic information, which means not only segmenting objects into parts but also classifying each part. A classic clustering method for handling point clouds is Density-Based Spatial Clustering of Applications with Noise (DBSCAN) [48], which is also popular in statistical analysis. Deep learning models automatically capture the spatial features of 3D structures. In recent years, deep learning-based 3D segmentation has seen significant advancements. A series of 3D segmentation algorithms for different 3D data structures has been proposed, including but not limited to PointNet++ [49], MeshCNN [50], PVCNN [51] and PTv3 [52]. Additionally, 3D segmentation can also be performed by projecting 2D segmentation results onto 3D data.

Skeletonization is the key step in skeleton extraction and analysis, which facilitates recording and analyzing the topological structure of 3D objects such as plant canopies. For instance, Laplacian Contraction [53] and the L1-medial skeleton construction algorithm [54] can extract skeletons from point clouds.
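To make the registration and segmentation steps described above more concrete, the minimal sketch below refines a coarse alignment between two scans with point-to-point ICP and then clusters the merged cloud with DBSCAN. It uses Open3D with placeholder file names and parameter values, standing in for whatever coarse registration and tuning a real pipeline would use:

```python
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("scan_view_1.ply")   # placeholder inputs
target = o3d.io.read_point_cloud("scan_view_2.ply")

# Fine registration: ICP refines an initial transform (here the identity,
# standing in for the result of a coarse registration step).
init = np.eye(4)
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.02, init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)

# Merge the aligned scans and segment them into clusters with DBSCAN.
source.transform(result.transformation)
merged = source + target
labels = np.asarray(merged.cluster_dbscan(eps=0.02, min_points=10))
print("number of clusters:", labels.max() + 1)
```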
2.4. Data types

Structured 3D data is the foundation for data storage and for applying machine learning algorithms in analysis. It provides the organized format needed for algorithms to process, interpret, and extract valuable insights, enabling accurate predictions and informed decision-making in various applications. Recent research on 3D reconstruction and measurement of agricultural information has explored various 3D data types, including point clouds, meshes, voxel grids, density fields and distance fields. These data structures can be divided into explicit representations and implicit representations, as shown in Figure 3.

Figure 3. 3D data types used in acquiring agricultural information, including explicit and implicit representations.

Point clouds are collections of data points with positions (usually in Cartesian coordinates) and additional information such as colors or surface normals. Such simplicity makes them efficient for real-time processing and mathematical computations, which explains their widespread use as the primary output format for most 3D sensors. In robotics and autonomous driving, point clouds enable real-time tasks such as navigation, mapping, and object detection. This simplicity also results in great compatibility with deep learning, facilitating faster neural network inference and training and simplifying data preprocessing [55]. Various 3D datasets are provided as point clouds, playing a vital role in computer vision, 3D modeling, robotics, autonomous driving, and geospatial analysis. However, the fidelity of point clouds is highly dependent on their density and accuracy. Perceiving from point clouds with low density or strong noise is a challenging task for machine learning, while point clouds with excessively high density require large memory and slow down computation. Therefore, a trade-off between fidelity and processing speed should be considered.

Unlike point clouds, meshes offer a more structured representation of surfaces, providing inherent connectivity between points. Meshes are efficient in representing objects because they focus on fitting the surface and defining the shape with polygons, which reduces the amount of data needed compared to a dense point cloud. However, constructing a mesh from unstructured raw data can be challenging, requiring algorithms to ensure smooth surface reconstruction and proper connectivity. A voxel grid is a 3D array where each voxel represents the smallest unit in a 3D space, containing properties like color or density. Voxel grids are useful in applications like medical imaging (e.g., MRI and CT scans) and photorealistic volume rendering, but high-resolution grids incur significant storage occupancy and computational cost. To ensure processing efficiency, simplified structures like sparse voxel octrees have been introduced.

While explicit representations directly describe the object's geometry, implicit 3D representations store information about a 3D object through implicit mathematical functions. Density fields are a typical implicit representation of a 3D model. Implicit means that instead of constant values stored in voxels, the density and color of a specific spatial coordinate in the scene are inferred from implicit functions such as parametric equations. Since neural networks are capable of universal approximation of unknown mappings, implicit density fields can also be fitted by specific neural networks [56]. Such inspiration gave birth to the Neural Radiance Field (NeRF), which uses implicit functions to efficiently represent and render photorealistic 3D scenes with fewer parameters [35]. Similar to density fields, distance fields are mappings between spatial coordinates and distances from surfaces. Signed Distance Fields (SDF) use negative values to indicate interior positions, enabling smooth, continuous surfaces. Previous works such as DeepSDF and NeuS have utilized neural networks to learn SDFs and output smooth and intact shapes of 3D objects [57,58].
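As a small, self-contained example of an implicit representation, the signed distance field of a sphere can be written in closed form and sampled on a voxel grid; negative values mark the interior and the zero level set is the surface. This is a purely illustrative sketch, independent of the cited works:

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, zero on the surface,
    positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Sample the field on a 64^3 voxel grid spanning [-1, 1]^3
axis = np.linspace(-1.0, 1.0, 64)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
sdf = sphere_sdf(grid.reshape(-1, 3), center=np.zeros(3), radius=0.5)
sdf = sdf.reshape(64, 64, 64)

inside = (sdf < 0).mean()
print(f"fraction of voxels inside the surface: {inside:.3f}")
# A mesh of the zero level set could then be extracted with Marching Cubes.
```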
3. Application of 3D reconstruction in modern agriculture

In the following, this article systematically reviews the specific uses of 3D reconstruction technology for different agricultural objects. These studies not only successfully reconstructed complex agricultural objects, but also carried out measurements on the generated 3D models to obtain valuable data. Some typical successful cases conducted on crops are shown in Figure 4.

Figure 4. Typical successful application cases of 3D reconstruction on crops: (a) 3D reconstruction of a maize canopy at 48 days after sowing [59]; (b) Representation of a skeleton tree graph as a curve tree via a quotient graph [60]; (c) Visualization of wheat 3D reconstruction and organ segmentation results [61]; (d) Spatial distribution of illness in a grapevine trunk [62]; (e) Point cloud of a field and the corresponding yield estimation map [63].

3.1. Application of 3D reconstruction on cereal crops

Cereal crops provide essential energy, carbohydrates and proteins for human life and livestock production [1]. Table 1 summarizes recent research on 3D reconstruction of cereal crops and its applications, categorized as canopy reconstruction, organ segmentation, threat assessment and yield estimation.

3D crop canopy data offer tremendous potential for analyzing phenotypic traits and archiving digital models for future analysis [40]. However, obtaining high-quality 3D crop canopy data is a challenging 3D reconstruction task because of the complicated canopy structure. A pipeline consisting of the SuperGlue matching network, feature key point adjustment, bundle adjustment and a self-supervised RepC-MVSNet model for point cloud generation and 3D reconstruction of wheat canopies has been proposed by Liu et al. [64], with a dense reconstruction speed of 5 minutes per plant. The skeleton and morphological structure of maize plants were derived from high-precision point clouds acquired by 3D laser scanners [65]. Arshad et al. [66] have evaluated different Neural Radiance Field (NeRF) techniques for the 3D geometry reconstruction of various plants in both indoor and outdoor environments. In the most realistic maize field scene, the NeRF models achieved a 74.6% F1 score compared with the result from a terrestrial laser scanner.

To realize the measurement of organ-level crop phenotypic traits, classification and segmentation of crop organs is an essential step when handling 3D data. McCormick et al. developed a phenotyping platform that generates 3D plant meshes representing shoot architecture in sorghum and manually segmented the meshes into a shoot cylinder, leaves, and an inflorescence [67]. Experiments were then conducted to reveal several QTLs related to organ-level traits measured from the 3D data. Nevertheless, manual segmentation of large quantities of 3D data can be time- and labor-intensive, and various techniques for automatic segmentation of plant organs have been explored. A 3D point cloud convolutional neural network (CNN) model, which outperformed PointNet with a segmentation accuracy of 93.4%, was designed to segment rice ears from stalks in panicle phenotyping [68]. Another low-cost 3D-modeling method for rice plants based on deep learning, shape from silhouette, and supervoxel clustering has been proposed to segment out panicles [69]. When using 90 panicle-segmented images, the proposed method in [69] could finish 3D panicle segmentation within 6 minutes, reaching a mean accuracy of 0.95.
Chang et al. have developed a method for detecting individual sorghum panicles in 3D point clouds derived from field UAV imagery and characterized the length and width of panicles using shape fitting [70]. The MVS-Pheno platform has been used to acquire high-quality multi-view stereo datasets of various crops, and a pipeline named DeepSeg3DMaize has been developed to segment organ instances and extract organ-level phenotypic traits such as stem height, leaf size and inclination [71]. The proposed DeepSeg3DMaize pipeline reached mean precision, recall, and F1-score of 0.94, 0.92 and 0.93, respectively, in the organ instance segmentation task.

Table 1. Overview of application of 3D reconstruction and measurement on cereal crops.

Target | Application | Objects | Principle | Method details | Information | Reference
Canopy Reconstruction | High quality point cloud reconstruction | potted wheat plant | MVS | SuperPoint + SuperGlue + FKA + FBA, RepC-MVSNet | depth map, dense point cloud | [64]
| Phenotyping parameters extraction | corn plant | SLS | Laplacian Point Cloud Contraction, Adaptive sampling, skeleton calibration | skeletons and morphological structure of plant | [65]
| Canopy geometry reconstruction | maize and other plants | MVS | Instant-NGP, NeRFacto, TensoRF | Canopy point cloud, reconstruction error, PSNR, SSIM, LPIPS | [66]
Organ Segmentation | Organ level traits measurement | sorghum plants | ToF | Frame registration, polygon approximation | leaf size, leaf area, plant height, shoot cylinder height, leaf angle | [67]
| Morphological indicators measurement | maize plants | MVS | MVS-Pheno platform, DeepSeg3DMaize network | Stem-leaf segmentation, leaf instance segmentation, stem height, leaf size, leaf inclination | [71]
| 3D panicle segmentation | rice panicle | SLS | DLP Structured Light, SE-Inception-PointConv, Panicle-3D network | segmentation between rice stalks and ears | [68]
| 3D plant segmentation | rice plant | MVS | SegNet, shape-from-silhouette, supervoxel clustering | segmentation of rice panicles | [69]
| 3D panicle segmentation | sorghum plants | MVS | Photogrammetry, color ratio threshold, shape fitting | panicle count, geometry and volume | [70]
Threat Assessment | Drought-resistant varieties identification | Maize | LiDAR | Distance-based clustering, voxelization | Plant height, plant area density, plant area index, projected leaf area | [72]
| Water stress detection | Maize | ToF | Multi-source image registration, Delaunay triangulation-based interpolation | Spatial distribution of canopy temperature and CWSI | [73]
Yield Estimation | Prediction of above-ground biomass | corn field | MVS | SfM-MVS, Regression | Crop surface model (crop height distribution) | [74]
| Above-ground biomass estimation | corn field | MVS, LiDAR | SfM-MVS, OLS, RF, BP, SVM | Prediction of above-ground biomass | [63]
| High yield variety breeding | rice | MVS | channel thresholding, OpenSFM | Regression of the number of matured grains and yield | [75]
| High yield variety breeding | wheat | MVS, LiDAR | SfM-MVS, DSM-based point cloud fusion | 3D spatial distribution of photosynthetic parameters, yield prediction | [76]
| Yield and grain protein content prediction | wheat | LiDAR | 2-to-2 deep learning prediction model | Time-series data of yield and grain protein content | [77]

Note: MVS: Multiview Stereo, SLS: Structured light scanning, ToF: Time-of-Flight, LiDAR: Light Detection And Ranging.
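Several of the studies above and in Table 1 report reconstruction or segmentation quality as precision, recall and F1-score against reference data. For geometry, a common variant compares a reconstructed point cloud against a reference scan using a distance threshold; a minimal sketch of such an F-score computation with Open3D is shown below, with an illustrative threshold and placeholder file names rather than the exact protocols of the cited papers:

```python
import numpy as np
import open3d as o3d

def fscore(reconstructed, reference, tau=0.01):
    """Precision/recall/F1 of a reconstructed point cloud against a reference
    scan, counting points closer than tau (in the same unit as the clouds)."""
    d_rec_to_ref = np.asarray(reconstructed.compute_point_cloud_distance(reference))
    d_ref_to_rec = np.asarray(reference.compute_point_cloud_distance(reconstructed))
    precision = (d_rec_to_ref < tau).mean()
    recall = (d_ref_to_rec < tau).mean()
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    return precision, recall, f1

rec = o3d.io.read_point_cloud("nerf_canopy.ply")        # placeholder inputs
ref = o3d.io.read_point_cloud("laser_scan_canopy.ply")
print(fscore(rec, ref, tau=0.01))
```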
Threat assessment proactively warns of risks such as pests, diseases, or environmental stress before they escalate into more severe problems, enabling timely and targeted interventions, minimizing crop loss, optimizing resource use, and improving overall farm productivity in precision agriculture. Terrestrial LiDAR has been employed to collect phenotypic traits of maize under drought stress, and plant height, plant area index and projected leaf area were chosen as key indicators to identify drought-resistant varieties; the values estimated from LiDAR data reached accuracies of 96%, 70%, and 92%, respectively [72]. Qiu et al. [73] have extracted maize canopy point clouds with the spatial distribution of temperature and crop water stress index (CWSI) from a Microsoft Kinect v2 and thermal cameras, contributing to crop water stress detection and analysis.

Yield estimation is an important issue in precision agriculture; it is directly related to profit estimation and agricultural resource scheduling and also helps breeders select high-yield varieties. Compared to traditional remote sensing techniques, 3D field data provide elevation data of the crop canopy and can therefore effectively improve prediction. Gilliot et al. [74] have used the SenseFly® eBee UAS platform to take photos of maize fields with GNSS positions and constructed a crop surface model by photogrammetric 3D reconstruction. Sampling and regression on the crop surface model outperformed manual sub-plot sampling in above-ground biomass estimation with 15% higher accuracy. Zhu et al. [63] have collected multi-source point clouds using a UAV platform with three sensors at different resolutions and generated datasets to estimate above-ground biomass with multiple machine learning models; the best model reached R2 of 0.83 and 0.81 for fresh and dry above-ground biomass, respectively. Okamoto et al. [75] have explored the relationship between reconstructed 3D points of rice fields and evaluation indices of yield. Gu et al. [76] fused LiDAR point clouds and multispectral imagery of wheat fields and collected 3D photosynthetic phenotype data with significant vertical distribution patterns, estimating the photosynthetic parameters of wheat with R2 between 0.75 and 0.84. Derived from the 3D photosynthetic data, two new 3D metrics have been developed to predict yield with higher accuracy and greater robustness than traditional methods. A 2-to-2 deep learning model has been designed to predict wheat yield and grain protein content simultaneously with field LiDAR and multispectral data as input [77].

3.2. Application of 3D reconstruction on profit crops

Profit crops are another important part of cultivation as they enrich human material life and bring the majority of income for small farms [78]. Table 2 summarizes recent research on 3D reconstruction on profit crops and its applications, categorized as canopy reconstruction, skeleton analysis, organ segmentation, threat assessment and post-harvest measurement.

Table 2. Overview of application of 3D reconstruction and measurement on profit crops.
Target | Application | Objects | Principle | Method details | Information | Reference
Canopy Reconstruction | Occluded canopy prediction | Sugarbeet root | MVS | Bundle Adjustment, PF-SGD, 3D template matching | Reconstructed canopy mesh | [79]
| Orchard scene reconstruction | strawberry orchard | MVS | NeRF-Ag, Environmental factor embedding | Neural density field of multiscale orchard scenes, rendered pictures from novel views | [80]
| Orchard scene reconstruction and understanding | Pepper | MVS | panoptic segmentation, PAg-NeRF | 3D panoptic field map | [81]
| Orchard scene mapping | strawberry and pepper rows | MVS | ORB-SLAM, Target-Aware Implicit Mapping | Implicit mapping of canopy and fruits | [82]
| 3D reflectance spectrum analysis | tomato, perilla, rapeseed | MVS | Next best-view planning, NeREF, radiometric calibration | 3D multispectral point clouds, EWT, SPAD values | [83]
Skeleton Analysis | Skeleton extraction | cherry and begonia trees | SLS | space colonization algorithm, DBSCAN branch identification | skeleton, branch angle, branch length | [84]
| Skeleton reasoning with occlusion | oak, apple and walnut trees | BV, MVS | likelihood map, Mask-RCNN segmentation | tree skeleton | [85]
| Growth monitoring | Tomato | SLS | Iterative non-rigid registration, hidden Markov model | Skeletal correspondences, temporal interpolation | [86]
| Main stalk and node detection | Cotton | LiDAR | Laplacian contraction | Main stalk length, node number, canopy graph | [87]
Organ Segmentation | 3D branch segmentation and pruning | Jujube tree | ToF | Laplacian-based contraction, SPGNet, DBSCAN | skeleton extraction, branch length and diameter | [88]
| Legume segmentation | Rape | SLS | Plant Segmentation Transformer | number of siliques, instance segmentation | [89]
| Fruit segmentation | Apple trees | LiDAR | DBSCAN clustering | reflectance intensity, geometric factors | [90]
| Yield estimation | strawberry field | BV, SLS | VINS-RGBD, PP-LiteSeg-T, Voxblox | Semantic mapping of strawberry field, fruit count | [91]
| Yield estimation | Cotton field | MVS | SfM-MVS, Super-voxel clustering, deep forest classification | cotton boll count and volume | [92]
| Pod counting and measurement | Peanut | MVS | Nerfacto/frustum PVCNN | point clouds with instance segmentation of peanut pods | [93]
Threat Assessment | Salinity stress detection | Cucumber leaves | MVS | Photogrammetry | dimension of cucumber leaves | [94]
| Seedling abnormality detection | Tomato seedlings | MVS | shape-from-silhouette, AutoEncoder + PointNet, semi-supervised learning | autoencoder features, abnormality classification | [38]
| Wilting measurement | cotton plants | MVS | PointSegAt, Active Boundary Segmentation, Edge Erosion | organ segmentation, organ size, wilting degree | [95]
| Clubroot disease identification | Oilseed rapes | MRI | Marching Cubes, Regression | lateral root number, root geometry, root volume | [96]
| Disease assessment | grapevine trunks | CT, MRI | multimodal machine learning, random forest, voxel classification | Spatial distribution of trunk lesions and defects | [62]

Note: BV: Binocular Vision, CT: X-ray Computed Tomography, MRI: Magnetic Resonance Imaging.

Several canopy reconstruction studies on profit crops based on multiview stereo have been carried out. Marks et al. [79] have presented an approach to precisely reconstruct sugar beet plants under occlusion in field conditions via UAV imagery and 3D template matching, with a precision of 81.65 compared to laser-scanned 3D models.
NeRF-Ag [80], a modeling strategy based on implicit neural density fields, has improved multi-scale 3D scene reconstruction and rendering of strawberry orchards by introducing environmental embeddings. PAg-NeRF [81] is another efficient system that can render novel-view photo-realistic images and panoptic 3D maps of sweet pepper fields. A similar implicit neural mapping framework, TAIM [82], which combines MVS with a SLAM-based pose initialization strategy, has achieved robust convergence in reconstructing canopies and fruits. Furthermore, research on plant canopy reconstruction is not limited to obtaining structure, color and texture information. By introducing view-planning-based adaptive data acquisition and a Neural Reference Field, Xie et al. [83] have fused multispectral imaging data and 3D point clouds and revealed the spatial distribution of canopy equivalent water thickness (EWT) and soil and plant analyzer device (SPAD) values, facilitating plant biology and genetic studies as well as crop breeding.

Skeleton extraction and analysis have been conducted to analyze plant canopies. Xu et al. [84] have extracted tree skeletons from scanned 3D point clouds with an improved space colonization algorithm and validated the accuracy of estimated branch length and angle via measurements on the skeletons. Kim et al. [85] have proposed a tree skeleton reasoning method based on multi-view RGB-D images collected from a robotic platform, with average skeleton precision and recall of 0.98 and 0.59 under occlusive scenarios. Mask-RCNN was employed to segment out and extract partial point clouds of branch instances, and the tree skeleton could then be repaired in a 3D likelihood map. In order to analyze temporal plant traits, Chebrolu et al. [86] took into account the non-rigidity and temporal growth of the plant and proposed a novel registration method that finds correspondences of skeleton points over time, which outperformed rigid-transformation-based registration by obtaining a mean registration error of 3 mm and a maximum error of 13 mm. Dense cotton plant point clouds were obtained by LiDAR, and a method combining Laplacian contraction and minimum spanning trees has been developed to detect the main stalk and nodes [87].

Research on organ segmentation of profit crops is not only for organ-level trait extraction, but also for performing automated tasks such as pruning and harvesting with agricultural robots. Ma et al. have collected high-quality point clouds of a jujube tree from RGB-D images using only 2 perspectives, and the branches were segmented from the trunk using the proposed SPGNet with Intersection-over-Union (IoU) of 0.85 for trunks and 0.76 for branches, providing convenience for measuring branches and making pruning decisions [88]. PST [89], a transformer-powered deep learning network, has been proposed to segment complex rapeseed plant point clouds and achieved superior performance in semantic and instance segmentation of siliques, raising mean coverage from 86.58% to 89.51% in instance segmentation compared to PointGroup. Fruit detection and segmentation is a crucial task in organ segmentation, with applications ranging from fruit counting and yield estimation to online spatial localization. Such localization provides essential spatial references for in-field harvest robots, enhancing their efficiency in navigating and picking operations.
Tsoulias et al. [90] have developed a LiDAR laser scanning system to locate, count and collect radiometric and geometric features of apples, with an F1-score higher than 76.9% in the evaluation of apple clusters. Yuan et al. [91] have developed VINS-RGBD, a system that integrates a semantic segmentation module and simultaneous localization and mapping (SLAM) technology, to achieve 3D point cloud reconstruction, semantic segmentation and yield estimation of strawberry plants in the field. Xiao et al. [92] have employed the SfM-MVS algorithm to collect 3D point clouds of cotton bolls in situ from UAV imagery and found that a cross-circling oblique route outperformed the traditional nadir route when collecting multi-view photos, raising the R2 value of cotton boll counting from 0.73 to 0.92. Super-voxel clustering and machine learning methods were then used to segment out cotton bolls, and an algorithmic process has been proposed for extracting boll quantity and volume data. Nerfacto and a CNN have been utilized to count and measure peanut pods from multi-view images of the whole plant, and the precision achieved at an IoU threshold of 0.5 is around 70% in 3D pod detection [93].

Threat assessment is also an important part of profit crop cultivation. Moualeu-Ngangué et al. [94] demonstrated affordable early detection of salinity stress from morphological traits of 3D meshes of cucumber leaves. Autoencoders were employed to detect abnormalities in large quantities of tomato seedlings with partially labeled 3D point cloud data [38]. The PointSegAt deep learning network model on 3D point clouds was used to perform wilting quantification experiments on two different varieties of cotton plants [95]. Tomography-based methods are helpful in detecting lesions in plant trunks and root systems. Feng et al. [96] have extracted grayscale histograms and 3D root architecture parameters from MRI images and developed a method for oilseed rape clubroot detection with a classification accuracy of 95.83% on the test dataset. Fernandez et al. [62] have established a multimodal 3D imaging workflow that can reconstruct the internal structure of grapevine trunks via MRI and CT images. Machine learning is also employed in the proposed workflow to classify degraded tissue or white rot voxels from intact tissue, with an F1-score > 90.5% for each class.

3.3. Application of 3D reconstruction on livestock

With the emerging demand for animal products in both quantity and quality and the advent of large-scale livestock farming techniques, animal husbandry is moving towards industrialization. Highly integrated breeding environments increase the pressure on livestock monitoring systems. The need for high-throughput monitoring of animal health, behavior, and welfare has become more critical to ensure effective management and maintain optimal production levels. In response, various computer vision-based solutions have been employed in precision livestock farming [97]. 3D reconstruction and measurement solutions have been shown to outperform 2D image-based solutions in several tasks, e.g., behavior analysis and body measurement. Table 3 summarizes application cases of 3D reconstruction on livestock from recent research, and representative visualization results are shown in Figure 5.

Traditionally, ear tags or collars embedded with RFID chips were used to identify individuals in livestock farming. However, these tags may be lost or induce stress and require extra management cost.
Zhou et al. [98] managed to train an improved PointNet++ LGG model to construct and identify individual feature fingerprints from point clouds of pig backs with an accuracy of 95.26%.

Table 3. Overview of application of 3D reconstruction and measurement on livestock.

Target | Application | Objects | Principles | Method details | Information | Reference
Instance Identification | Individual pig identification | Pig | ToF | PointNet++LGG | Point clouds of pig back | [98]
Body Dimension Measurement | 3D reconstruction of pig bodies | Pig | ToF | Mask-RCNN feature point detection, noise filtering, ICP registration | chest girth and hip width | [99]
| Automatic body size measurement | Pig | ToF | Improved PointNet++ segmentation | Body dimension and circumference of different parts | [100]
| 3D body shape analysis | Cow | SLS | Poisson surface reconstruction | Heart girth, chest depth, wither height, hip width, backside width, ischial width | [101]
| Growth monitoring | Dairy cow | SLS | gradient calculation | Hip distance, height, head size, body length, depth and back slope | [102]
Health Monitoring | Lameness detection | Cow | ToF | Detectron2, IoU-based tracking, backbone classification | height curve of backbone | [103]
| Behavior analysis | Chicken | BV | active contour model, region-scalable fitting | motion parameters (displacement, speed, acceleration), behavior classification | [16]
| Feather damage detection | Chicken | BV | adaptive aggregation network, heterogeneous image registration | 3D body point clouds with depth and thermal information, damaged parts and damage depth | [104]

Figure 5. Typical successful application cases of 3D reconstruction on livestock: (a) Top view of the point cloud of a pig's back for instance identification [98]; (b) Key point extraction for animal body size measurement [102]; (c) Equipment and depth sensor used to monitor lameness of cows [103].

Body measurement of livestock is an important task for accurate assessment of growth and production performance. 3D morphological data are more conducive to extracting measurement points or accurately segmenting the measured parts to obtain more accurate measurement results. In addition, three-dimensional measurement data can serve as an important archive for subsequent morphological analysis. Lei et al. [99] developed a non-contact system for perceiving pig body measurements using ToF depth cameras, in which Mask-RCNN was used to detect measurement reference points, reporting relative errors for chest girth and hip width of 3.55% and 2.83%, respectively. An automatic pig body size measurement algorithm based on improved PointNet++ segmentation has been developed, which achieved good robustness and scalability in the measurement of body size and circumference of different parts [100]. Through various measurement techniques, e.g., dorsal ridge line fitting, the algorithm is capable of measuring pig bodies with non-standard postures. Le Cozler et al. [101] have designed Morpho3D, an automatic tool using a laser scanner and Poisson surface reconstruction to extract 3D body meshes and morphological parameters of Holstein cows. The reproducibility and repeatability coefficients of variation for this measurement were reported to be less than 4%. Pezzuolo et al. [102] performed an uncertainty analysis of shape measurement of dairy cows and reported reliable metrological performance in the measurement of head size, hip distance, withers-to-tail length, chest girth, and hip and withers height by the Microsoft Kinect™ v1.
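Many of the body dimensions in Table 3 reduce to distances measured on the point cloud once the animal has been separated from its surroundings. As a minimal illustration (not any of the cited pipelines), the sketch below fits the floor plane with RANSAC and reports the animal's height as the largest point-to-plane distance, using Open3D with a placeholder input file:

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("pig_pen_scan.ply")   # placeholder depth-camera scan

# Fit the dominant plane (assumed to be the floor) with RANSAC.
(a, b, c, d), floor_idx = pcd.segment_plane(distance_threshold=0.02,
                                            ransac_n=3, num_iterations=1000)

# Remove floor points; what remains is assumed to be the animal.
animal = pcd.select_by_index(floor_idx, invert=True)
pts = np.asarray(animal.points)

# Height = largest distance from the floor plane to an animal point.
dist = np.abs(pts @ np.array([a, b, c]) + d) / np.linalg.norm([a, b, c])
print(f"estimated body height: {dist.max():.3f} m")
```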
3D observation also helps researchers assess the health status of livestock from parameters of morphology and motion. Tun et al. [103] segmented cow instances using Detectron2 and constructed 3D backbone height curves from depth images to detect lameness, achieving a lameness classification accuracy of 81.1%. Xiao et al. [16] proposed an automatic behavior monitoring method, with detection accuracies for drinking and eating above 94.5%, for caged chickens on a binocular vision system, utilizing 3D reconstruction to accurately extract the 3D contours of the chickens' heads and bodies. Experimental results demonstrated that 3D contours outperform 2D contours in analysis, which facilitates monitoring the health condition of caged chickens in real time by deriving relevant information from the motion parameters of their eating and drinking behaviors. A heterogeneous image registration method has been employed to acquire 3D body point clouds with depth and thermal information from laying hens, from which feather damage depth can be measured [104]. Results showed that 3D body point clouds perform better in damage detection than 2D RGB-thermal images, achieving R2 = 0.946 and RMSE = 2.015 mm in the prediction of damage depth.

3.4. Application of 3D reconstruction in aquaculture

3D reconstruction in aquaculture is a challenging task due to the underwater environment. Most 3D sensors are not designed to operate effectively in underwater scenarios. Additionally, light behaves differently in water, refracting as it passes through, which distorts images and complicates depth calculation. The scattering and absorption of light further reduce visibility and accuracy, making it difficult to obtain precise 3D reconstructions in aquatic settings compared to land-based environments. Nevertheless, there are several successful research cases in fish shape reconstruction, geometry measurement and motion tracking. Some typical visualized results of these studies are given in Figure 6, while detailed information is listed in Table 4.

Figure 6. Typical successful application cases of 3D reconstruction in aquaculture: (a) Fish body with coded structured light patterns [105]; (b) Tilapia body length measurement considering distance and angle variation [106]; (c) Fish 3D trajectories in Cartesian space [107].

Table 4. Overview of application of 3D reconstruction and measurement in aquaculture.
Target | Objects | Principles | Method details | Information | Reference
Fish shape reconstruction | fish | BV | Deep learning-based landmark estimation, landmark alignment | 3D fish landmarks, fish point cloud model | [108]
| seabream and seabass | SLS | coded structured light | Depth map and 3D fish model | [105]
| fish | MVS | Silhouette and key-point extraction, shape fitting | animated fish model | [109]
Fish size measurement | Bluefin tuna | BV | Deformable silhouette modeling and fitting, local thresholding | Snout fork length | [110]
| Micropterus salmoides | BV | 2-stage key point detection network, stereo matching | Spatial coordinates of fish head and tail, body length | [111]
| fish | BV | Mask-RCNN + Grabcut segmentation, stereo matching | Fish point cloud model, length and width | [112]
| Red finned fugu, filefish | BV | Multi-media size regression, YOLOv7 segmentation | Body length | [113]
| tilapia | BV | SAM segmentation, mass estimation model | body length, body mass estimation | [106]
| spotted knifejaw | BV | Regression on body area | fish area, prediction of fish mass | [114]
Fish 3D tracking | salmon | BV | YOLOv5 eye detection, stereo matching, trajectory analysis | Spatial trajectory, speed and acceleration | [115]
| fish | BV | YOLOv7, DeepSORT | Individual fish ID, position, distance to camera, speed | [116]
| zebrafish | MVS | idTracker appearance analysis | Fish spatial trajectory and speed | [107]

Based on paired binocular images, a solution named MoFiM, which reconstructs the fish via 3D landmark alignment, has been proposed; it introduced a chirality-supervision-incorporated hourglass network to increase the accuracy of landmark extraction and lowered the 3D landmark reprojection error to 1.7229% [108]. Veinidis et al. [105] have introduced coded structured light with a specially designed pattern to reconstruct the shape of seabream and seabass. Wu et al. [109] have developed DeepShapeKit and successfully generated smoothed 4D shapes of fish from synchronized video frames of front and bottom views, with mean key-point errors of less than 5 pixels.

Fish size information is an important indicator for monitoring fish biomass and health status. However, since fish deform during movement, finding measurement references is a key issue for accurate fish morphology measurement. Muñoz-Benavent et al. [110] have deployed an underwater binocular vision system inside grow-out cages to sample fish length from binocular video frames, introducing an improved geometric model [117] for accurate length measurement with up to 90% of the samples bounded within a 3% error margin. Deng et al. [111] have introduced a modified 3D reconstruction algorithm for multi-media scenarios and achieved more accurate estimation of fish length, with a mean relative error of 1.05% ± 3.30%, from binocular images taken above the water surface. SMDMS, another scheme for fish length and width estimation, consists of stereo vision, deep learning-powered fish instance segmentation, 3D point cloud extraction and measurement [112]. Gao et al. [113] applied Snell's law to correct the deviations in depth calculation in multi-medium scenarios and analyzed the estimation error of fish body length under different depths and imaging angles. Not limited to fish size, Feng et al. have explored regression models from tilapia body length to body mass and found that the quadratic model had the best performance among different mass groups, with R2 > 0.91 and a mean relative error lower than 5.90% [106]. To acquire fish length from binocular images, the SAM model was utilized for fish body segmentation and feature point selection.
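The refraction correction used in several of these multi-medium measurements follows Snell's law, n_air·sin(θ_air) = n_water·sin(θ_water): a ray crossing the air-water interface bends, so a depth computed as if light travelled in a straight line must be corrected. The sketch below is a purely illustrative small-scale example (flat interface, camera in air, n_water ≈ 1.33) and not the exact model of any cited paper:

```python
import numpy as np

N_AIR, N_WATER = 1.0, 1.33

def refracted_angle(theta_air_rad):
    """Snell's law at a flat air-water interface."""
    return np.arcsin(np.clip(N_AIR * np.sin(theta_air_rad) / N_WATER, -1.0, 1.0))

def corrected_depth(apparent_depth, theta_air_rad):
    """Correct an apparent underwater depth measured along a viewing ray
    (theta_air_rad > 0). The horizontal offset of the ray is preserved at
    the interface, so true depth scales with tan(theta_air) / tan(theta_w);
    for small angles this approaches apparent_depth * N_WATER (about 1.33x)."""
    theta_w = refracted_angle(theta_air_rad)
    return apparent_depth * np.tan(theta_air_rad) / np.tan(theta_w)

print(corrected_depth(0.50, np.radians(10)))  # ~0.67 m instead of the apparent 0.50 m
```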
Shi et al. [114] have leveraged fish body area measured from binocular vision to estimate fish mass.

Fish motion tracking at aquaculture farming sites can provide the basis for analyzing fish behaviors, and possibly stress levels and animal welfare. Nygård et al. [115] focused on 3D tracking of fish eyes from binocular images and thereby calculated their 3D positions and motion. Saad et al. [116] proposed a novel framework combining StereoYolo and DeepSORT to achieve multiple-fish identification and motion tracking. Audira et al. have established a special apparatus with a mirror to simultaneously collect the top and side views of a fish tank in one photo and restore the 3D position of every zebrafish based on the open-source idTracker [107,118].

It can be concluded that these studies predominantly adopt stereo vision-based solutions, since stereo vision leverages triangulation to obtain true size information without reference objects. Some studies have waterproofed sensors for underwater use, while others have performed measurements outside of the water. Both approaches require consideration of corresponding refraction correction models. Machine learning-based techniques, such as instance segmentation and stereo matching, are widely applied in underwater 3D reconstruction and measurement. However, these studies share a common challenge: the difficulty of overcoming the effects of occlusion. This issue should be addressed in future research, particularly when observing scenarios with dense populations such as commercial fish ponds.

3.5. Application of 3D reconstruction on post-harvest products

Figure 7. Typical successful application cases of 3D reconstruction on post-harvest products: (a) Fruit images, point clouds and 3D surface models for morphological measurement [119]; (b) Device for measuring the 3D electrical impedance of maize ears [120]; (c) 17-sphere particle model of buckwheat seeds and simulation test of its stacking angle [121].

Post-harvest product measurement helps maintain product quality and consistency by providing a standardized reference for quality evaluation and grading. Moreover, the measured indicators are crucial for phenotyping analysis of crops. Based on 3D reconstruction, a series of studies on 3D morphological measurement of post-harvest products has been carried out for product classification, evaluation, recording, grading and breeding. As shown in Figure 7, these studies can be categorized into morphological measurement, inner trait measurement and simulation analysis, while detailed information is summarized in Table 5.

Table 5. Overview of application of 3D reconstruction and measurement of post-harvest products.
Application | Objects | Principles | Method details | Information | Reference
Fruit Traits Measurement | Pear | ToF | ICP, LCCP stalk removal | Centroid-based perimeters | [119]
| Carrot | ToF | ICP, Poisson reconstruction | registration error, 3D meshes, dimensions and volume | [122]
| Apple | BV | A-KAZE feature matching, PMVS | Diameter, height, shape index, volume | [123]
| Apple | SLS, BV | ICP registration | Diameter, height, deformity index, volume | [124]
| Navel oranges | BV | Stereo matching, structural feature extraction, attention weight generation | reconstruction error, surface depth | [125]
| Walnut | MVS | Instant neural density field | Color, length, width, height, surface area, volume | [126]
| Plum, fig, date, mushroom | SLS, MVS | Laser scanning, photogrammetry, artificial neural network | volume during shrinkage | [127]
| Blueberry cluster | MVS | Photogrammetry, Mask R-CNN segmentation projection, sphere fitting | berry count, volume, and maturity | [128]
| Walnut | CT | Micro CT | Length, width, height, shell thickness, kernel volume, etc. | [129]
Inner Trait Measurement | Corn kernel | CT | Micro CT, ResNet-50 classification | mold origin, temporal volume change, degree of mold contamination | [130]
| Corn kernel | CT | Micro CT, CTAN | Tissue size, tissue volume, cavity volume, etc. | [131]
| Corn ear | EIT | 3D EIT, RFNetEIT | conductivity distribution in maize ears | [120]
Simulation Analysis | Corn seed | SLS | Laser scanning, automatic ball filling and optimization | irregular 3D particle modeling of corn seed | [132]
| Sorghum seeds | SLS | Laser scanning, multi-sphere method, EDEM | collision restitution coefficient, static friction coefficient, rolling friction coefficient | [133]
| Buckwheat seed | CT | multi-sphere particle modeling | physical parameters, contact parameters | [121]

Note: EIT: Electrical Impedance Tomography.

Wang et al. [119] have used a Kinect v2.0 camera and an electric table to measure morphological traits of pears from centroid-based positions, and a strategy based on locally convex connected patches has been proposed to remove the stalk before geometry measurement. Xie et al. [122] have used a similar device to extract carrot meshes with Poisson Surface Reconstruction and obtain morphological features of carrots; the morphological variables obtained from the 3D solid models had a MAPE below 3%. Binocular vision [123] and structured light [124] have been used to estimate apple phenotypic parameters rapidly. However, reconstructing fruits with dense and highly repetitive surface texture, such as navel oranges, is a very challenging task for passive 3D reconstruction methods. Gao et al. [125] have introduced OrangeStereo, a novel stereo matching algorithm that enhances depth estimation of fruit surfaces, with an inference time of only 33 milliseconds and an RMSE of depth prediction of 0.81 mm. In addition, implicit neural networks have proved able to restore surface information, including geometry and color features, of fruits with complicated surfaces such as walnuts [126]. Mollazade et al. [127] have developed a 3D laser imaging system for measuring the volumetric shrinkage of multiple horticultural products during drying to monitor the drying process, and a comparative test with photogrammetry has been conducted to evaluate the accuracy of the proposed imaging system. 3D models of berry fruit bunches have been obtained through 3D photogrammetry, and deep learning-based 2D instance segmentation results were projected onto the models to segment, count, and estimate morphological harvestability traits of individual blueberries efficiently [128]. Experiments showed that the accuracy of determining the fruit number in a cluster is 97.3%, and the linear regression for cluster maturity has an R2 of 0.908 with an RMSE of 0.068.
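Many of the size traits in Table 5 (length, width, height, volume) can be read directly from the reconstructed model once it has been cleaned and meshed. The following is a minimal, generic sketch with Open3D and a placeholder file name, not the pipeline of any cited study:

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("reconstructed_fruit.ply")   # placeholder model

# Length / width / height from the oriented bounding box of the fruit.
obb = mesh.get_oriented_bounding_box()
length, width, height = sorted(obb.extent, reverse=True)
print(f"dimensions (model units): {length:.3f} x {width:.3f} x {height:.3f}")

# Volume is only defined for a closed (watertight) surface.
if mesh.is_watertight():
    print(f"volume (model units^3): {mesh.get_volume():.4f}")
else:
    print("mesh is not watertight; close holes before computing volume")
```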
In addition, tomography plays a vital role in inner trait measurement. Using X-ray CT, Bernard et al. [129] collected 14 traits, including traits that previously required destructive measurement, such as shell thickness, kernel volume and kernel/nut filling ratio, and their experiments showed that 50 samples are sufficient to phenotype the fruit quality of one accession. Tomography-based 3D reconstruction thus unveils inner traits of post-harvest agricultural products without destructive observation. The development of internal mold contamination in maize kernels over time, including its origin and volume change, has been revealed by micro-CT-scanned 3D models [130]. Micro-CT reconstruction has also been used to extract phenotypic traits of maize seeds such as tissue size, tissue volume and cavity volume [131]. Zheng et al. [120] developed a measurement module and introduced the RFNetEIT framework for absolute imaging of the 3D electrical impedance of maize ears, revealing their conductivity distribution. 3D reconstruction of post-harvest products is also conducive to the design of related automated facilities. The discrete element method (DEM) is a key tool for simulating granular movement and is therefore essential in the design and optimization of agricultural machinery such as seed metering devices and harvesters. By modeling real seeds with 3D reconstruction, designers can obtain more realistic simulation results and improve machinery performance and reliability. Yan et al. [132] introduced a 3D laser scanning system for corn seed simulation modeling, exhibiting significantly improved precision and efficiency in analytical experiments on a seed metering device. Mi et al. [133] also employed a 3D laser scanner to extract the outlines of sorghum seeds, constructed simulation models by multi-sphere particle filling, and calibrated several physical properties such as friction coefficients with high accuracy and reliability. Similarly, Li et al. [121] obtained a 3D buckwheat seed model by CT scanning and conducted static parameter calibration and a dynamic seed metering simulation test. A 36-sphere particle model was selected and proved to offer a good balance between simulation accuracy and computational efficiency, with relative errors of the calibrated coefficients below 0.7%.
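As a simplified illustration of the multi-sphere idea behind these DEM models (not the calibrated procedures of [121,132,133]; the seed is idealized as an ellipsoid of revolution and the sphere count is an arbitrary assumption), the sketch below generates a clump of overlapping spheres along the major axis of a seed, the kind of rigid-particle geometry that DEM packages accept.

```python
import numpy as np

def ellipsoid_clump(a: float, b: float, n_spheres: int = 9):
    """Approximate an ellipsoidal seed (semi-axes a >= b, assumed
    rotationally symmetric about the major axis) by a chain of
    overlapping spheres centred on that axis.

    Each sphere radius matches the ellipsoid's cross-section at its
    centre: r(x) = b * sqrt(1 - (x / a) ** 2).
    """
    # Keep centres strictly inside the ellipsoid so every radius is positive.
    xs = np.linspace(-0.8 * a, 0.8 * a, n_spheres)
    clump = []
    for x in xs:
        r = b * np.sqrt(1.0 - (x / a) ** 2)
        clump.append({"center": (float(x), 0.0, 0.0), "radius": float(r)})
    return clump

# Hypothetical seed roughly 6 mm long and 4 mm wide (dimensions in metres).
for sphere in ellipsoid_clump(a=3.0e-3, b=2.0e-3, n_spheres=9):
    print(sphere)
```

In practice the sphere layout is fitted to the scanned seed surface, and the clump's contact parameters (restitution, static and rolling friction) are then calibrated against physical tests such as the stacking-angle experiment shown in Figure 7c.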
4. Discussion

It is worth reflecting that, compared with other industries, the adoption and widespread application of 3D reconstruction technology in the agricultural domain remain relatively limited. In the following discussion, this study analyzes the current difficulties and possible future development trends of 3D reconstruction in agriculture from several perspectives.

4.1. Scale and efficiency issues

A significant portion of current research on 3D reconstruction in agriculture has been conducted under relatively ideal laboratory conditions. These experiments often focus on specific, controlled scenarios, such as reconstructing individual plants or single fruits, which may not fully reflect the complexities of real-world agricultural tasks. Unfortunately, few studies on 3D reconstruction in agriculture have taken the timeliness required by actual agricultural tasks into account. Addressing this gap is essential to enhance the practical applicability of these technologies. On the other hand, some reconstruction methods take hours to a day to reconstruct a scene, which is impractical for real-time use. An important reason 3D reconstruction has not been widely introduced into agricultural tasks is that its advantage in perception does not outweigh the loss in efficiency compared with 2D image-based methods. Introducing state-of-the-art 3D reconstruction technology or optimizing task-specific algorithms are feasible research directions to promote the implementation of 3D reconstruction and measurement in agriculture. Such efforts would not only improve the feasibility and efficiency of 3D reconstruction but also facilitate its large-scale deployment for agricultural measurement and management tasks.

4.2. Multimodal 3D reconstruction

Most studies rely on sensing data from a single source, which results in insufficient robustness when working in natural environments. Leveraging experience in fusing multi-source sensor data from domains such as autonomous driving and remote sensing could serve as a promising path for future research in agricultural applications [24,25,62]. Additionally, the flexible integration of 3D reconstruction with other perception systems used in agriculture (e.g., spectral imaging) to explore the 3D spatial distribution of multiple traits is worthy of further exploration.

4.3. Inspiration from novel view synthesis

Although the recent revolutionary 3D scene representations mentioned in Section 2.2.2 are intermediate products of solving the novel view synthesis problem, the 3D reconstruction models obtained from these representations surpass those of traditional multi-view 3D reconstruction algorithms, which has prompted researchers to consider improving 3D reconstruction tasks in the agricultural field by introducing these methods. It is gratifying that some researchers have already applied neural radiance field-based methods to actual tasks and achieved better performance [66,81,93,126,134]. However, to the best of our knowledge, no publication has yet introduced 3D Gaussian splatting, a more recent state-of-the-art approach, to agricultural reconstruction tasks. It is foreseeable that 3DGS-driven reconstruction will be put into application in the agricultural field in the next few years.

5. Conclusion

The integration of 3D reconstruction and measurement technologies in agriculture represents a transformative shift towards more efficient and sustainable farming practices. Advances in computer vision and machine learning, particularly the development of deep learning, have significantly enhanced the ability to monitor and analyze agricultural systems. This review retrospectively organized relevant studies on agricultural applications of 3D reconstruction and measurement, analyzing the equipment, platforms, algorithms, data structures, processing methods and other related technologies, summarized according to application scenarios. As demonstrated in this review, these technologies facilitate precise assessments of crops, livestock, aquatic animals and post-harvest products, enabling better decision-making and resource management. A further discussion of current challenges and future prospects was carried out to provide suggestions for future research.
As humans strive to optimize agricultural production while minimizing environmental impact, embracing these innovative solutions for obtaining agricultural information will be essential to achieving more sustainable agriculture for generations to come.

Funding: The authors gratefully acknowledge the Zhejiang Province Key Research and Development Program (2022C02056) for financial support.

Conflict of interest: The authors declare no conflict of interest.

References

1. Poutanen KS, Kårlund AO, Gómez-Gallego C, et al. Grains—A major source of sustainable protein for health. Nutrition Reviews. 2022; 80(6): 1648-1663. doi: 10.1093/nutrit/nuab084

2. Godfray HCJ, Aveyard P, Garnett T, et al. Meat consumption, health, and the environment. Science. 2018; 361(6399). doi: 10.1126/science.aam5324

3. Huang J, Yang G. Understanding recent challenges and new food policy in China. Global Food Security. 2017; 12: 119-126. doi: 10.1016/j.gfs.2016.10.002

4. Foley JA, Ramankutty N, Brauman KA, et al. Solutions for a cultivated planet. Nature. 2011; 478(7369): 337-342. doi: 10.1038/nature10452

5. Tilman D, Balzer C, Hill J, et al. Global food demand and the sustainable intensification of agriculture. Proceedings of the National Academy of Sciences. 2011; 108(50): 20260-20264. doi: 10.1073/pnas.1116437108

6. FAO. The State of Food and Agriculture 2023. In FAO eBooks; 2023.

7. Elijah O, Rahman TA, Orikumhi I, et al. An Overview of Internet of Things (IoT) and Data Analytics in Agriculture: Benefits and Challenges. IEEE Internet of Things Journal. 2018; 5(5): 3758-3773. doi: 10.1109/jiot.2018.2844296

8. Saiz-Rubio V, Rovira-Más F. From Smart Farming towards Agriculture 5.0: A Review on Crop Data Management. Agronomy. 2020; 10(2): 207. doi: 10.3390/agronomy10020207

9. Nie J, Wang Y, Li Y, et al. Artificial intelligence and digital twins in sustainable agriculture and forestry: a survey. Turkish Journal of Agriculture and Forestry. 2022; 46(5): 642-661. doi: 10.55730/1300-011x.3033

10. Karunathilake EMBM, Le AT, Heo S, et al. The Path to Smart Farming: Innovations and Opportunities in Precision Agriculture. Agriculture. 2023; 13(8): 1593. doi: 10.3390/agriculture13081593

11. Fanzo J, Davis C, McLaren R, et al. The effect of climate change across food systems: Implications for nutrition outcomes. Global Food Security. 2018; 18: 12-19. doi: 10.1016/j.gfs.2018.06.001

12. Wang T, Chen B, Zhang Z, et al. Applications of machine vision in agricultural robot navigation: A review. Computers and Electronics in Agriculture. 2022; 198: 107085. doi: 10.1016/j.compag.2022.107085

13. Rehman TU, Mahmud MdS, Chang YK, et al. Current and future applications of statistical machine learning algorithms for agricultural machine vision systems. Computers and Electronics in Agriculture. 2019; 156: 585-605. doi: 10.1016/j.compag.2018.12.006

14. Yu F, Wang M, Xiao J, et al. Advancements in Utilizing Image-Analysis Technology for Crop-Yield Estimation. Remote Sensing. 2024; 16(6): 1003. doi: 10.3390/rs16061003

15. Niu H, Landivar J, Duffield N. Classification of cotton water stress using convolutional neural networks and UAV-based RGB imagery. Advances in Modern Agriculture. 2024; 5(1). doi: 10.54517/ama.v5i1.2457

16. Xiao L, Ding K, Gao Y, et al. Behavior-induced health condition monitoring of caged chickens using binocular vision. Computers and Electronics in Agriculture. 2019; 156: 254-262. doi: 10.1016/j.compag.2018.11.022

17. Jin X, Zarco-Tejada PJ, Schmidhalter U, et al. High-Throughput Estimation of Crop Traits: A Review of Ground and Aerial Phenotyping Platforms. IEEE Geoscience and Remote Sensing Magazine. 2021; 9(1): 200-231. doi: 10.1109/mgrs.2020.2998816

18. Verdouw C, Tekinerdogan B, Beulens A, et al. Digital twins in smart farming. Agricultural Systems. 2021; 189: 103046. doi: 10.1016/j.agsy.2020.103046

19. Pylianidis C, Osinga S, Athanasiadis IN. Introducing digital twins to agriculture. Computers and Electronics in Agriculture. 2021; 184: 105942. doi: 10.1016/j.compag.2020.105942

20. Lei L, Yang Q, Yang L, et al. Deep learning implementation of image segmentation in agricultural applications: a comprehensive review. Artificial Intelligence Review. 2024; 57(6). doi: 10.1007/s10462-024-10775-6

21. Zheng Y, Jiang W. Evaluation of Vision Transformers for Traffic Sign Classification. Wireless Communications and Mobile Computing. 2022; 2022: 1-14. doi: 10.1155/2022/3041117

22. Yang B, Wang X, Xing Y, et al. Modality Fusion Vision Transformer for Hyperspectral and LiDAR Data Collaborative Classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2024; 17: 17052-17065. doi: 10.1109/jstars.2024.3415729

23. Tummala S, Kadry S, Bukhari SAC, et al. Classification of Brain Tumor from Magnetic Resonance Imaging Using Vision Transformers Ensembling. Current Oncology. 2022; 29(10): 7498-7511. doi: 10.3390/curroncol29100590

24. He M, Jiang W, Gu W. TriChronoNet: Advancing electricity price prediction with Multi-module fusion. Applied Energy. 2024; 371: 123626. doi: 10.1016/j.apenergy.2024.123626

25. Lu Y, Wang W, Bai R, et al. Hyper-relational interaction modeling in multi-modal trajectory prediction for intelligent connected vehicles in smart cites. Information Fusion. 2025; 114: 102682. doi: 10.1016/j.inffus.2024.102682

26. Han XF, Laga H, Bennamoun M. Image-Based 3D Object Reconstruction: State-of-the-Art and Trends in the Deep Learning Era. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2021; 43(5): 1578-1604. doi: 10.1109/tpami.2019.2954885

27. Akhtar MS, Zafar Z, Nawaz R, et al. Unlocking plant secrets: A systematic review of 3D imaging in plant phenotyping techniques. Computers and Electronics in Agriculture. 2024; 222: 109033. doi: 10.1016/j.compag.2024.109033

28. Wang J, Xie Z, Mao P, et al. Fruit modeling and application based on 3D imaging technology: a review. Journal of Food Measurement and Characterization. 2024; 18(6): 4120-4136. doi: 10.1007/s11694-024-02480-3

29. Ma W, Qi X, Sun Y, et al. Computer Vision-Based Measurement Techniques for Livestock Body Dimension and Weight: A Review. Agriculture. 2024; 14(2): 306. doi: 10.3390/agriculture14020306

30. Lee Y. Three-Dimensional Dense Reconstruction: A Review of Algorithms and Datasets. Sensors. 2024; 24(18): 5861. doi: 10.3390/s24185861

31. Sutherland IE. Sketchpad—a man-machine graphical communication system. Seminal graphics. 1998.

32. Edl M, Mizerák M, Trojan J. 3D Laser Scanners: History and Applications. Acta Simulatio. 2018; 4(4): 1-5. doi: 10.22306/asim.v4i4.54

33. Van As H, van Duynhoven J. MRI of plants and foods. Journal of Magnetic Resonance. 2013; 229: 25-34. doi: 10.1016/j.jmr.2012.12.019

34. Ghosh T, Maity PP, Rabbi SMF, et al. Application of X-ray computed tomography in soil and plant -a review. Frontiers in Environmental Science. 2023; 11. doi: 10.3389/fenvs.2023.1216630

35. Mildenhall B, Srinivasan PP, Tancik M, et al. NeRF. Communications of the ACM. 2021; 65(1): 99-106. doi: 10.1145/3503250

36. Müller T, Evans A, Schied C, et al. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics. 2022; 41(4): 1-15. doi: 10.1145/3528223.3530127

37. Kerbl B, Kopanas G, Leimkuehler T, et al. 3D Gaussian Splatting for Real-Time Radiance Field Rendering. ACM Transactions on Graphics. 2023; 42(4): 1-14. doi: 10.1145/3592433

38. de Villiers HAC, Otten G, Chauhan A, et al. Autoencoder-based 3D representation learning for industrial seedling abnormality detection. Computers and Electronics in Agriculture. 2023; 206: 107619. doi: 10.1016/j.compag.2023.107619

39. Shi Z, Meng Z, Xing Y, Ma Y, Wattenhofer R. 3D-RETR: End-to-end single and multi-view 3d reconstruction with transformers. Computer Vision and Pattern Recognition; 2021.

40. Paproki A, Sirault X, Berry S, et al. A novel mesh processing based technique for 3D plant analysis. BMC Plant Biology. 2012; 12(1): 63. doi: 10.1186/1471-2229-12-63

41. Heinemann M, Herzfeld J, Sliwinski M, et al. A metrological and application-related comparison of six consumer grade stereo depth cameras for the use in robotics. In: Proceedings of the 2022 IEEE International Symposium on Robotic and Sensors Environments (ROSE); 2022.

42. Xu X, Zhang L, Yang J, et al. A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR. Remote Sensing. 2022; 14(12): 2835. doi: 10.3390/rs14122835

43. Lowe DG. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision. 2004.

44. Bay H, Ess A, Tuytelaars T, et al. Speeded-Up Robust Features (SURF). Computer Vision and Image Understanding. 2008; 110(3): 346-359. doi: 10.1016/j.cviu.2007.09.014

45. Fischler MA, Bolles RC. Random sample consensus. Communications of the ACM. 1981; 24(6): 381-395. doi: 10.1145/358669.358692

46. Maćkiewicz A, Ratajczak W. Principal components analysis (PCA). Computers & Geosciences; 1993.

47. Rusinkiewicz S, Levoy M. Efficient variants of the ICP algorithm. In: Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling; 2001.

48. Ester M, Kriegel HP, Sander J, Xu X. A density-based algorithm for discovering clusters in large spatial databases with noise. In: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining; 1996.

49. Qi CR, Yi L, Su H, Guibas LJ. PointNet++: deep hierarchical feature learning on point sets in a metric space. In: Proceedings of the 31st International Conference on Neural Information Processing Systems; 2017.

50. Hanocka R, Hertz A, Fish N, et al. MeshCNN. ACM Transactions on Graphics. 2019; 38(4): 1-12. doi: 10.1145/3306346.3322959

51. Liu Z, Tang H, Lin Y, Han S. Point-voxel CNN for efficient 3D deep learning. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems; 2019.

52. Wu X, Jiang L, Wang PS, et al. Point Transformer V3: Simpler, Faster, Stronger. In: Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2024.

53. Cao J, Tagliasacchi A, Olson M, et al. Point Cloud Skeletons via Laplacian Based Contraction. In: Proceedings of the 2010 Shape Modeling International Conference; 2010.

54. Huang H, Wu S, Cohen-Or D, et al. L1-medial skeleton of point cloud. ACM Transactions on Graphics. 2013; 32(4): 1-8. doi: 10.1145/2461912.2461913

55. Guo Y, Wang H, Hu Q, et al. Deep Learning for 3D Point Clouds: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2021; 43(12): 4338-4364. doi: 10.1109/tpami.2020.3005434

56. Hornik K, Stinchcombe M, White H. Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Networks; 1990.

57. Park JJ, Florence P, Straub J, et al. DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. In: Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2019.

58. Wang P, Liu L, Liu Y, et al. NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction. In Advances in Neural Information Processing Systems; 2021.

59. Xiao S, Fei S, Li Q, et al. The Importance of Using Realistic 3D Canopy Models to Calculate Light Interception in the Field. Plant Phenomics. 2023; 5. doi: 10.34133/plantphenomics.0082

60. Chaudhury A, Godin C. Skeletonization of Plant Point Cloud Data Using Stochastic Optimization Framework. Frontiers in Plant Science. 2020; 11. doi: 10.3389/fpls.2020.00773

61. Li W, Wu S, Wen W, et al. Using high-throughput phenotype platform MVS-Pheno to reconstruct the 3D morphological structure of wheat. AoB PLANTS. 2024; 16(2). doi: 10.1093/aobpla/plae019

62. Fernandez R, Le Cunff L, Mérigeaud S, et al. End-to-end multimodal 3D imaging and machine learning workflow for non-destructive phenotyping of grapevine trunk internal structure. Scientific Reports. 2024; 14(1). doi: 10.1038/s41598-024-55186-3

63. Zhu W, Sun Z, Peng J, et al. Estimating Maize Above-Ground Biomass Using 3D Point Clouds of Multi-Source Unmanned Aerial Vehicle Data at Multi-Spatial Scales. Remote Sensing. 2019; 11(22): 2678. doi: 10.3390/rs11222678

64. Liu H, Xin C, Lai M, et al. RepC-MVSNet: A Reparameterized Self-Supervised 3D Reconstruction Algorithm for Wheat 3D Reconstruction. Agronomy. 2023; 13(8): 1975. doi: 10.3390/agronomy13081975

65. Wu S, Wen W, Xiao B, et al. An Accurate Skeleton Extraction Approach From 3D Point Clouds of Maize Plants. Frontiers in Plant Science. 2019; 10. doi: 10.3389/fpls.2019.00248

66. Arshad MA, Jubery T, Afful J, et al. Evaluating Neural Radiance Fields for 3D Plant Geometry Reconstruction in Field Conditions. Plant Phenomics. 2024; 6. doi: 10.34133/plantphenomics.0235

67. Mccormick RF, Truong SK, Mullet JE. 3D sorghum reconstructions from depth images identify QTL regulating shoot architecture. Plant Physiology; 2016.

68. Gong L, Du X, Zhu K, et al. Panicle-3D: Efficient Phenotyping Tool for Precise Semantic Segmentation of Rice Panicle Point Cloud. Plant Phenomics. 2021; 2021. doi: 10.34133/2021/9838929

69. Wu D, Yu L, Ye J, et al. Panicle-3D: A low-cost 3D-modeling method for rice panicles based on deep learning, shape from silhouette, and supervoxel clustering. The Crop Journal. 2022; 10(5): 1386-1398. doi: 10.1016/j.cj.2022.02.007

70. Chang A, Jung J, Yeom J, et al. 3D Characterization of Sorghum Panicles Using a 3D Point Cloud Derived from UAV Imagery. Remote Sensing. 2021; 13(2): 282. doi: 10.3390/rs13020282

71. Li Y, Wen W, Miao T, et al. Automatic organ-level point cloud segmentation of maize shoots by integrating high-throughput data acquisition and deep learning. Computers and Electronics in Agriculture. 2022; 193: 106702. doi: 10.1016/j.compag.2022.106702

72. Su Y, Wu F, Ao Z, et al. Evaluating maize phenotype dynamics under drought stress using terrestrial lidar. Plant Methods. 2019; 15(1). doi: 10.1186/s13007-019-0396-x

73. Qiu R, Miao Y, Zhang M, et al. Detection of the 3D temperature characteristics of maize under water stress using thermal and RGB-D cameras. Computers and Electronics in Agriculture. 2021; 191: 106551. doi: 10.1016/j.compag.2021.106551

74. Gilliot JM, Michelin J, Hadjard D, et al. An accurate method for predicting spatial variability of maize yield from UAV-based plant height estimation: a tool for monitoring agronomic field experiments. Precision Agriculture. 2020; 22(3): 897-921. doi: 10.1007/s11119-020-09764-w

75. Okamoto T, Kimura A, Shimono H, et al. 3D reconstruction of rice plant community using spectral images with a goal of making rice breeding efficient. In: Proceedings of the International Workshop on Advanced Imaging Technology (IWAIT) 2023; 2023.

76. Gu Y, Wang Y, Wu Y, et al. Novel 3D photosynthetic traits derived from the fusion of UAV LiDAR point cloud and multispectral imagery in wheat. Remote Sensing of Environment. 2024; 311: 114244. doi: 10.1016/j.rse.2024.114244

77. Sun Z, Li Q, Jin S, et al. Simultaneous Prediction of Wheat Yield and Grain Protein Content Using Multitask Deep Learning from Time-Series Proximal Sensing. Plant Phenomics. 2022; 2022. doi: 10.34133/2022/9757948

78. Kurdyś-Kujawska A, Strzelecka A, Zawadzka D. The Impact of Crop Diversification on the Economic Efficiency of Small Farms in Poland. Agriculture. 2021; 11(3): 250. doi: 10.3390/agriculture11030250

79. Marks E, Magistri F, Stachniss C. Precise 3D Reconstruction of Plants from UAV Imagery Combining Bundle Adjustment and Template Matching. In: Proceedings of the 2022 International Conference on Robotics and Automation (ICRA); 2022.

80. Zhang J, Wang X, Ni X, et al. Neural radiance fields for multi-scale constraint-free 3D reconstruction and rendering in orchard scenes. Computers and Electronics in Agriculture. 2024; 217: 108629. doi: 10.1016/j.compag.2024.108629

81. Smitt C, Halstead M, Zimmer P, et al. PAg-NeRF: Towards Fast and Efficient End-to-End Panoptic 3D Representations for Agricultural Robotics. IEEE Robotics and Automation Letters. 2024; 9(1): 907-914. doi: 10.1109/lra.2023.3338515

82. Kelly S, Riccardi A, Marks E, et al. Target-Aware Implicit Mapping for Agricultural Crop Inspection. In: Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA); 2023.

83. Xie P, Ma Z, Du R, et al. An unmanned ground vehicle phenotyping-based method to generate three-dimensional multispectral point clouds for deciphering spatial heterogeneity in plant traits. Molecular Plant. 2024; 17(10): 1624-1638. doi: 10.1016/j.molp.2024.09.004

84. Xu Y, Hu C, Xie Y. An improved space colonization algorithm with DBSCAN clustering for a single tree skeleton extraction. International Journal of Remote Sensing. 2022; 43(10): 3692-3713. doi: 10.1080/01431161.2022.2102950

85. Kim CH, Kantor G. Occlusion Reasoning for Skeleton Extraction of Self-Occluded Tree Canopies. In: Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA); 2023.

86. Chebrolu N, Labe T, Stachniss C. Spatio-Temporal Non-Rigid Registration of 3D Point Clouds of Plants. In: Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA); 2020.

87. Sun S, Li C, Chee PW, et al. High resolution 3D terrestrial LiDAR for cotton plant main stalk and node detection. Computers and Electronics in Agriculture. 2021; 187: 106276. doi: 10.1016/j.compag.2021.106276

88. Ma B, Du J, Wang L, et al. Automatic branch detection of jujube trees based on 3D reconstruction for dormant pruning using the deep learning-based method. Computers and Electronics in Agriculture. 2021; 190: 106484. doi: 10.1016/j.compag.2021.106484

89. Du R, Ma Z, Xie P, et al. PST: Plant segmentation transformer for 3D point clouds of rapeseed plants at the podding stage. ISPRS Journal of Photogrammetry and Remote Sensing. 2023; 195: 380-392. doi: 10.1016/j.isprsjprs.2022.11.022

90. Tsoulias N, Paraforos DS, Xanthopoulos G, et al. Apple Shape Detection Based on Geometric and Radiometric Features Using a LiDAR Laser Scanner. Remote Sensing. 2020; 12(15): 2481. doi: 10.3390/rs12152481

91. Yuan Q, Wang P, Luo W, et al. Simultaneous Localization and Mapping System for Agricultural Yield Estimation Based on Improved VINS-RGBD: A Case Study of a Strawberry Field. Agriculture. 2024; 14(5): 784. doi: 10.3390/agriculture14050784

92. Xiao S, Fei S, Ye Y, et al. 3D reconstruction and characterization of cotton bolls in situ based on UAV technology. ISPRS Journal of Photogrammetry and Remote Sensing. 2024; 209: 101-116. doi: 10.1016/j.isprsjprs.2024.01.027

93. Saeed F, Sun J, Ozias-Akins P, et al. PeanutNeRF: 3D Radiance Field for Peanuts. In: Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); 2023.

94. Moualeu-Ngangué D, Bötzl M, Stützel H. First Form, Then Function: 3D Reconstruction of Cucumber Plants (Cucumis sativus L.) Allows Early Detection of Stress Effects through Leaf Dimensions. Remote Sensing. 2022; 14(5): 1094. doi: 10.3390/rs14051094

95. Hao H, Wu S, Li Y, et al. Automatic acquisition, analysis and wilting measurement of cotton 3D phenotype based on point cloud. Biosystems Engineering. 2024; 239: 173-189. doi: 10.1016/j.biosystemseng.2024.02.010

96. Feng L, Chen S, Wu B, et al. Detection of oilseed rape clubroot based on low-field nuclear magnetic resonance imaging. Computers and Electronics in Agriculture. 2024; 218: 108687. doi: 10.1016/j.compag.2024.108687

97. Wang Y, Mücher S, Wang W, et al. A review of three-dimensional computer vision used in precision livestock farming for cattle growth management. Computers and Electronics in Agriculture. 2023; 206: 107687. doi: 10.1016/j.compag.2023.107687

98. Zhou H, Li Q, Xie Q. Individual Pig Identification Using Back Surface Point Clouds in 3D Vision. Sensors. 2023; 23(11): 5156. doi: 10.3390/s23115156

99. Lei K, Tang X, Li X, et al. Research and Preliminary Evaluation of Key Technologies for 3D Reconstruction of Pig Bodies Based on 3D Point Clouds. Agriculture. 2024; 14(6): 793. doi: 10.3390/agriculture14060793

100. Hao H, Jincheng Y, Ling Y, et al. An improved PointNet++ point cloud segmentation model applied to automatic measurement method of pig body size. Computers and Electronics in Agriculture. 2023; 205: 107560. doi: 10.1016/j.compag.2022.107560

101. Le Cozler Y, Allain C, Caillot A, et al. High-precision scanning system for complete 3D cow body shape imaging and analysis of morphological traits. Computers and Electronics in Agriculture. 2019; 157: 447-453. doi: 10.1016/j.compag.2019.01.019

102. Pezzuolo A, Guarino M, Sartori L, et al. A Feasibility Study on the Use of a Structured Light Depth-Camera for Three-Dimensional Body Measurements of Dairy Cows in Free-Stall Barns. Sensors. 2018; 18(2): 673. doi: 10.3390/s18020673

103. Tun SC, Onizuka T, Tin P, et al. Revolutionizing Cow Welfare Monitoring: A Novel Top-View Perspective with Depth Camera-Based Lameness Classification. Journal of Imaging. 2024; 10(3): 67. doi: 10.3390/jimaging10030067

104. Zhang X, Zhang Y, Geng J, et al. Feather Damage Monitoring System Using RGB-Depth-Thermal Model for Chickens. Animals. 2022; 13(1): 126. doi: 10.3390/ani13010126

105. Veinidis C, Arnaoutoglou F, Syvridis D. 3D Reconstruction of Fishes Using Coded Structured Light. Journal of Imaging. 2023; 9(9): 189. doi: 10.3390/jimaging9090189

106. Feng G, Pan B, Chen M. Non-Contact Tilapia Mass Estimation Method Based on Underwater Binocular Vision. Applied Sciences. 2024; 14(10): 4009. doi: 10.3390/app14104009

107. Audira G, Sampurna B, Juniardi S, et al. A Simple Setup to Perform 3D Locomotion Tracking in Zebrafish by Using a Single Camera. Inventions. 2018; 3(1): 11. doi: 10.3390/inventions3010011

108. Yin J, Zhu D, Shi M, et al. MoFiM: A morphable fish modeling method for underwater binocular vision system. Computer Animation and Virtual Worlds. 2022; 33(5). doi: 10.1002/cav.2104

109. Wu R, Deussen O, Li L. DeepShapeKit: accurate 4D shape reconstruction of swimming fish. In: Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2022.

110. Muñoz-Benavent P, Andreu-García G, Valiente-González JM, et al. Enhanced fish bending model for automatic tuna sizing using computer vision. Computers and Electronics in Agriculture. 2018; 150: 52-61. doi: 10.1016/j.compag.2018.04.005

111. Deng Y, Tan H, Zhou D, et al. An automatic body length estimating method for Micropterus salmoides using local water surface stereo vision. Biosystems Engineering. 2023; 235: 166-179. doi: 10.1016/j.biosystemseng.2023.09.013

112. Huang K, Li Y, Suo F, et al. Stereo Vison and Mask-RCNN Segmentation Based 3D Points Cloud Matching for Fish Dimension Measurement. In: Proceedings of the 2020 39th Chinese Control Conference (CCC); 2020.

113. Gao T, Xiong Z, Li Z, et al. Precise underwater fish measurement: A geometric approach leveraging medium regression. Computers and Electronics in Agriculture. 2024; 221: 108932. doi: 10.1016/j.compag.2024.108932

114. Shi C, Zhao R, Liu C, et al. Underwater fish mass estimation using pattern matching based on binocular system. Aquacultural Engineering. 2022; 99: 102285. doi: 10.1016/j.aquaeng.2022.102285

115. Nygård TA, Jahren JH, Schellewald C, et al. Motion trajectory estimation of salmon using stereo vision. IFAC-PapersOnLine. 2022; 55(31): 363-368. doi: 10.1016/j.ifacol.2022.10.455

116. Saad A, et al. StereoYolo + DeepSORT: A framework to track fish from underwater stereo camera in situ. In: Proceedings of SPIE; 2024.

117. Atienza-Vanacloig V, Andreu-García G, López-García F, et al. Vision-based discrimination of tuna individuals in grow-out cages through a fish bending model. Computers and Electronics in Agriculture. 2016; 130: 142-150. doi: 10.1016/j.compag.2016.10.009

118. Pérez-Escudero A, Vicente-Page J, Hinz RC, et al. idTracker: tracking individuals in a group by automatic identification of unmarked animals. Nature Methods. 2014; 11(7): 743-748. doi: 10.1038/nmeth.2994

119. Wang Y, Chen Y. Fruit Morphological Measurement Based on Three-Dimensional Reconstruction. Agronomy. 2020; 10(4): 455. doi: 10.3390/agronomy10040455

120. Zheng HY, Li Y, Wang N, et al. A novel framework for three-dimensional electrical impedance tomography reconstruction of maize ear via feature reconfiguration and residual networks. PeerJ Computer Science. 2024; 10: e1944. doi: 10.7717/peerj-cs.1944

121. Li G, Li H, Li X, et al. Establishment and Calibration of Discrete Element Model for Buckwheat Seed Based on Static and Dynamic Verification Test. Agriculture. 2023; 13(5): 1024. doi: 10.3390/agriculture13051024

122. Xie W, Wei S, Yang D. Morphological measurement for carrot based on three-dimensional reconstruction with a ToF sensor. Postharvest Biology and Technology. 2023; 197: 112216. doi: 10.1016/j.postharvbio.2022.112216

123. Ma H, Zhu X, et al. Rapid estimation of apple phenotypic parameters based on 3D reconstruction. International Journal of Agricultural and Biological Engineering. 2021; 14(5): 180-188. doi: 10.25165/j.ijabe.20211405.6258

124. Yu S, Yan X, Jia T, et al. Binocular structured light-based 3D reconstruction for morphological measurements of apples. Postharvest Biology and Technology. 2024; 213: 112952. doi: 10.1016/j.postharvbio.2024.112952

125. Gao Y, Wang Q, Rao X, et al. OrangeStereo: A navel orange stereo matching network for 3D surface reconstruction. Computers and Electronics in Agriculture. 2024; 217: 108626. doi: 10.1016/j.compag.2024.108626

126. Huang T, Bian Y, Niu Z, et al. Fast neural distance field-based three-dimensional reconstruction method for geometrical parameter extraction of walnut shell from multiview images. Computers and Electronics in Agriculture. 2024; 224: 109189. doi: 10.1016/j.compag.2024.109189

127. Mollazade K, Lucht J van der, Jörissen S, et al. 3D laser imaging for measuring volumetric shrinkage of horticultural products during drying process. Computers and Electronics in Agriculture. 2023; 207: 107749. doi: 10.1016/j.compag.2023.107749

128. Ni X, Li C, Jiang H, et al. Three-dimensional photogrammetry with deep learning instance segmentation to extract berry fruit harvestability traits. ISPRS Journal of Photogrammetry and Remote Sensing. 2021; 171: 297-309. doi: 10.1016/j.isprsjprs.2020.11.010

129. Bernard A, Hamdy S, Le Corre L, et al. 3D characterization of walnut morphological traits using X-ray computed tomography. Plant Methods. 2020; 16(1). doi: 10.1186/s13007-020-00657-7

130. Zhang Y, Hui Y, Zhou Y, et al. Characterization and Detection Classification of Moldy Corn Kernels Based on X-CT and Deep Learning. Applied Sciences. 2024; 14(5): 2166. doi: 10.3390/app14052166

131. Zhao H, Wang J, Liao S, et al. Study on the micro-phenotype of different types of maize kernels based on Micro-CT. Smart Agriculture; 2021.

132. Yan H. 3D Scanner-Based Corn Seed Modeling. Applied Engineering in Agriculture; 2016.

133. Mi G, Liu Y, Wang T, et al. Measurement of Physical Properties of Sorghum Seeds and Calibration of Discrete Element Modeling Parameters. Agriculture. 2022; 12(5): 681. doi: 10.3390/agriculture12050681

134. Huang H, Tian G, Chen C. Evaluating the Point Cloud of Individual Trees Generated from Images Based on Neural Radiance Fields (NeRF) Method. Remote Sensing. 2024; 16(6): 967. doi: 10.3390/rs16060967



Copyright (c) 2024 Ting Huang, Tao Wang, Ziang Niu, Chen Yang, Zixing Wu, Zhengjun Qiu

License URL: https://creativecommons.org/licenses/by/4.0/
