Results 1 - 20 of 718
1.
Sensors (Basel) ; 24(17)2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39275409

ABSTRACT

Three-dimensional point cloud registration is a critical task in 3D perception for sensors that aims to determine the optimal alignment between two point clouds by finding the best transformation. Existing methods like RANSAC and its variants often face challenges such as sensitivity to low overlap rates, high computational costs, and susceptibility to outliers, leading to inaccurate results, especially in complex or noisy environments. In this paper, we introduce a novel 3D registration method, CL-PCR, inspired by the concept of maximal cliques and built upon the SC2-PCR framework. Our approach allows for the flexible use of smaller sampling subsets to extract more local consensus information, thereby generating accurate pose hypotheses even in scenarios with low overlap between point clouds. This method enhances robustness against low overlap and reduces the influence of outliers, addressing the limitations of traditional techniques. First, we construct a graph matrix to represent the compatibility relationships among the initial correspondences. Next, we build clique-like subsets of various sizes within the graph matrix, each representing a consensus set. Then, we compute the transformation hypotheses for the subsets using the SVD algorithm and select the best hypothesis for registration based on evaluation metrics. Extensive experiments demonstrate the effectiveness of CL-PCR. In comparison experiments on the 3DMatch/3DLoMatch datasets using both FPFH and FCGF descriptors, our Fast-CL-PCRv1 outperforms state-of-the-art algorithms, achieving superior registration performance. Additionally, we validate the practicality and robustness of our method with real-world data.
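The transformation-hypothesis step described above (estimating a rigid transform from a consensus subset of correspondences via SVD) follows the standard Kabsch procedure; the sketch below is a generic illustration of that step, not the authors' implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i.

    src, dst: (N, 3) arrays of corresponding points (a consensus subset).
    Returns (R, t) computed via SVD of the cross-covariance matrix.
    """
    c_src = src.mean(axis=0)                 # centroids
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

For each clique-like subset, the (R, t) hypothesis that scores best under the evaluation metric (e.g. inlier count) would then be kept.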

2.
Sensors (Basel) ; 24(17)2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39275488

ABSTRACT

This study introduced a depth-sensing-based approach with robust algorithms for tracking relative morphological changes in the chests of patients undergoing physical therapy. The problem addressed was the periodic change in morphological parameters induced by breathing: since the recording was continuous, the parameters were extracted at the moments of maximum and minimum chest volume (inspiration and expiration moments) and analyzed. The parameters were derived from morphological transverse cross-sections (CSs), which were extracted at the moments of maximal and minimal depth variation, and the reliability of the results was expressed through the coefficient of variation (CV) of the resulting curves. Across all subjects and levels of observed anatomy, the mean CV for CS depth values was smaller than 2%, and the mean CV of the CS area was smaller than 1%. To prove the reproducibility of the measurements (extraction of morphological parameters), 10 subjects were recorded in two consecutive sessions separated by a short interval (2 weeks) in which no changes in the monitored parameters were expected; statistical methods showed no statistically significant difference between the sessions, confirming the reproducibility hypothesis. Additionally, based on the representative CSs for the inspiration and expiration moments, chest mobility in quiet breathing was examined, and the statistical test showed no difference between the two sessions. The findings justify the proposed algorithm as a valuable tool for evaluating the impact of rehabilitation exercises on chest morphology.
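The reliability metric used above, the coefficient of variation of a measured curve, follows the usual definition; a minimal sketch:

```python
import numpy as np

def coefficient_of_variation(curve):
    """CV (%) of a 1-D array of repeated measurements, e.g. cross-section
    depth values across breathing cycles; lower means more reliable."""
    curve = np.asarray(curve, dtype=float)
    return 100.0 * curve.std(ddof=1) / curve.mean()
```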


Subject(s)
Algorithms, Cerebral Palsy, Thorax, Humans, Cerebral Palsy/physiopathology, Cerebral Palsy/pathology, Child, Male, Thorax/diagnostic imaging, Female, Respiration, Reproducibility of Results
3.
Sensors (Basel) ; 24(17)2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39275497

ABSTRACT

Studies on autonomous driving have started to focus on snowy environments, and work on acquiring data and removing noise pixels caused by snowfall in such environments is in progress. However, research to determine the weather information necessary for controlling unmanned platforms by sensing the degree of snowfall in real time has not yet been conducted. Therefore, in this study, we attempted to determine snowfall information for autonomous driving control in snowy weather conditions. To this end, snowfall data were acquired by LiDAR sensors in various snowy areas in South Korea, Sweden, and Denmark. Snow, which was extracted using a snow removal filter (the LIOR filter that we previously developed), was newly classified and defined based on the extracted number of snow particles, the actual snowfall total, and the weather forecast at the time. Finally, we developed an algorithm that extracts only snow in real time and then provides snowfall information to an autonomous driving system. This algorithm is expected to promote driving safety in real-time weather conditions, with an effect similar to that of actual controllers.

4.
Sensors (Basel) ; 24(17)2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39275606

ABSTRACT

Short-range MEMS-based (Micro-Electro-Mechanical System) LiDAR provides precise point cloud datasets for rock fragment surfaces. However, MEMS-based LiDAR signals contain more vibrational noise, so the reconstructed point cloud data cannot be guaranteed to remain undistorted at a high compression ratio. Many studies have illustrated that wavelet-based clustered compressive sensing can improve reconstruction precision. The k-means clustering algorithm can be conveniently employed to obtain clusters; however, estimating a meaningful k value (i.e., the number of clusters) is challenging. An excessive quantity of clusters is not necessary for dense point clouds, as this leads to elevated consumption of memory and CPU resources. For sparser point clouds, fewer clusters lead to more distortions, while excessive clusters lead to more voids in the reconstructed point clouds. This study proposes a local clustering method to determine a number of clusters closer to the actual number based on GMM (Gaussian Mixture Model) observation distances and density peaks. Experimental results illustrate that the estimated number of clusters is closer to the actual number on four datasets from the KEEL public repository. In point cloud compression and recovery experiments, our proposed approach compresses and recovers the Bunny and Armadillo datasets in the Stanford 3D repository; the experimental results illustrate that our approach improves the geometry and curvature similarity of the reconstructed point clouds. Furthermore, the geometric similarity increases to above 0.9 on our complete rock fragment surface datasets after selecting a better wavelet basis for each dimension of the MEMS-based LiDAR signals. In both experiments, the sparsity of the signals was 0.8 and the sampling ratio was 0.4. Finally, a rock outcrop point cloud data experiment is utilized to verify that the proposed approach is applicable to large-scale research objects. All of our experiments illustrate that the proposed adaptive clustered compressive sensing approach can better reconstruct MEMS-based LiDAR point clouds with a lower sampling ratio.

5.
Sensors (Basel) ; 24(17)2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39275608

ABSTRACT

Autonomous driving systems are a rapidly evolving technology. Trajectory prediction is a critical component of autonomous driving systems that enables safe navigation by anticipating the movement of surrounding objects. Lidar point-cloud data provide a 3D view of the solid objects surrounding the ego-vehicle, so trajectory prediction using Lidar point clouds performs better than prediction from 2D RGB cameras, which do not directly provide the distance between the target object and the ego-vehicle. However, processing point-cloud data is costly and complicated, and state-of-the-art 3D trajectory prediction approaches suffer from handcrafted, inefficient architectures, which can lead to slow and erroneous predictions, low accuracy, and suboptimal inference times. Neural architecture search (NAS) is a method proposed to optimize neural network models by using search algorithms to redesign architectures based on their performance and runtime. This paper introduces TrajectoryNAS, a novel NAS method designed to develop an efficient and more accurate LiDAR-based trajectory prediction model for the objects surrounding the ego vehicle. TrajectoryNAS systematically optimizes the architecture of an end-to-end trajectory prediction algorithm, incorporating all stacked components that are prerequisites for trajectory prediction, including object detection and object tracking, using metaheuristic algorithms. This approach addresses the neural architecture design of each component of trajectory prediction, considering accuracy loss and the associated latency overhead. Our method introduces a novel multi-objective energy function that integrates accuracy and efficiency metrics, enabling the creation of a model that significantly outperforms existing approaches. Through empirical studies, TrajectoryNAS demonstrates its effectiveness in enhancing the performance of autonomous driving systems, marking a significant advancement in the field. Experimental results reveal that TrajectoryNAS yields a minimum of 4.8% higher accuracy and 1.1× lower latency than competing methods on the NuScenes dataset.
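A multi-objective energy function of the kind described can be illustrated as a weighted combination of accuracy loss and normalized latency; the function name, weight, and latency budget below are hypothetical, not the paper's exact formulation:

```python
def nas_energy(accuracy_loss, latency_ms, latency_budget_ms=100.0, weight=0.5):
    """Hypothetical multi-objective NAS energy: lower is better.
    Combines task loss with a normalized latency penalty; `weight`
    trades accuracy against runtime."""
    return accuracy_loss + weight * (latency_ms / latency_budget_ms)
```

A metaheuristic search would then rank candidate architectures by this scalar energy, so faster architectures are preferred whenever their accuracy loss is comparable.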

6.
Sensors (Basel) ; 24(17)2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39275604

ABSTRACT

This work focuses on the improvement of the density peaks clustering (DPC) algorithm and its application to LiDAR point cloud segmentation. The improvement of DPC focuses on avoiding the manual determination of the cut-off distance and the manual selection of cluster centers, so the clustering process of the improved DPC is automatic, without manual intervention. The cut-off distance is avoided by forming a voxel structure and using the number of points in a voxel as the voxel's local density. The automatic selection of cluster centers is realized by selecting as centers the voxels whose γ values are greater than the γ value at the inflection point of the fitted γ curve. Finally, a new merging strategy is introduced to overcome the over-segmentation problem and obtain the final clustering result. To verify the effectiveness of the improved DPC, experiments on LiDAR point cloud segmentation in different scenes were conducted, with the basic DPC, K-means, and DBSCAN introduced for comparison. The experimental results showed that the improved DPC is effective and that its application to LiDAR point cloud segmentation is successful. Compared with the basic DPC and K-means, the improved DPC has better clustering accuracy, and compared with DBSCAN, it has comparable or slightly better clustering accuracy without nontrivial parameters.
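The center-selection idea above (voxel occupancy as local density ρ, distance δ to the nearest denser point, and γ = ρ·δ) can be sketched as follows; the brute-force distances and index-based tie-breaking are simplifications for illustration, and the γ-curve inflection cut is replaced by simply returning the γ ranking:

```python
import numpy as np

def dpc_center_ranking(points, voxel=1.0):
    """Density-peaks sketch: rho = occupancy of a point's voxel,
    delta = distance to the nearest denser point (ties broken by index),
    gamma = rho * delta. Returns point indices ranked by gamma; the
    paper cuts this ranking automatically at the inflection point of
    the fitted gamma curve."""
    pts = np.asarray(points, float)
    n = len(pts)
    keys = np.floor(pts / voxel).astype(int)
    _, inv, counts = np.unique(keys, axis=0,
                               return_inverse=True, return_counts=True)
    rho = counts[inv.reshape(-1)].astype(float)   # voxel occupancy per point
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    delta = np.empty(n)
    for i in range(n):
        denser = (rho > rho[i]) | ((rho == rho[i]) & (np.arange(n) < i))
        delta[i] = d[i, denser].min() if denser.any() else d[i].max()
    gamma = rho * delta
    return np.argsort(-gamma)                     # best candidates first
```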

7.
Sensors (Basel) ; 24(17)2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39275696

ABSTRACT

Fusing data from many sources helps to achieve improved analysis and results. In this work, we present a new algorithm to fuse data from multiple cameras with data from multiple lidars. This algorithm was developed to increase the sensitivity and specificity of autonomous vehicle perception systems, in which the most accurate sensors measuring the vehicle's surroundings are cameras and lidar devices. Perception systems based on data from one type of sensor do not use complete information and have lower quality. The camera provides two-dimensional images; lidar produces three-dimensional point clouds. We developed a method for matching pixels on a pair of stereoscopic images using dynamic programming, inspired by an algorithm for matching sequences of amino acids used in bioinformatics. We improve the quality of the basic algorithm using additional data from edge detectors, and we further improve its performance by reducing the range of matched pixels using available car speeds. In the final step of our method, we perform point cloud densification, fusing lidar output data with the stereo vision output. We implemented our algorithm in C++ with a Python API and provide it as an open-source library named Stereo PCD, which very efficiently fuses data from multiple cameras and multiple lidars. In the article, we present the results of our approach on benchmark databases in terms of quality and performance, and we compare our algorithm with other popular methods.
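The bioinformatics-inspired pixel matching can be sketched as a Needleman-Wunsch-style alignment of two scanlines, minimizing intensity differences with a gap penalty for occlusions; this is a simplification of the library's actual matcher, and the gap penalty is an assumed parameter:

```python
import numpy as np

def match_scanlines(left, right, gap=1.0):
    """Align two pixel rows (1-D intensity arrays) by dynamic programming,
    analogous to amino-acid sequence alignment: minimize the sum of
    absolute intensity differences plus a gap penalty for unmatched
    pixels. Returns matched index pairs (i, j)."""
    n, m = len(left), len(right)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * gap
    D[0, :] = np.arange(m + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(left[i - 1] - right[j - 1])
            D[i, j] = min(D[i - 1, j - 1] + cost,   # match
                          D[i - 1, j] + gap,        # gap in right row
                          D[i, j - 1] + gap)        # gap in left row
    pairs, i, j = [], n, m                          # backtrack
    while i > 0 and j > 0:
        if D[i, j] == D[i - 1, j - 1] + abs(left[i - 1] - right[j - 1]):
            pairs.append((i - 1, j - 1)); i, j = i - 1, j - 1
        elif D[i, j] == D[i - 1, j] + gap:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```

The column offset j - i of each matched pair is the disparity, from which depth follows via the stereo baseline.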

8.
Animals (Basel) ; 14(17)2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39272242

ABSTRACT

Non-contact measurement based on the 3D reconstruction of sheep bodies can alleviate the stress response in sheep during manual measurement of body dimensions. However, data collection is easily affected by environmental factors and noise, which is not conducive to practical production needs. To address this issue, this study proposes a non-contact data acquisition system and a 3D point cloud reconstruction method for sheep bodies. The collected sheep body data can provide reference data for sheep breeding and fattening. The acquisition system consists of a Kinect v2 depth camera group, a sheep passage, and a restraining pen, synchronously collecting data from three perspectives. The 3D point cloud reconstruction method for sheep bodies is implemented in C++ with the Point Cloud Library (PCL). It removes noise through pass-through filtering, statistical filtering, and random sample consensus (RANSAC). A conditional voxel filtering box is proposed to downsample and simplify the point cloud data. Coarse and fine registration are performed with the RANSAC and Iterative Closest Point (ICP) algorithms, respectively, to improve registration accuracy and robustness, achieving 3D reconstruction of sheep bodies. On this basis, 135 sets of point cloud data were collected from 20 sheep. After 3D reconstruction, the error of the reconstructed body length relative to the actual values was 0.79%, indicating that this method can provide reliable reference data for 3D point cloud reconstruction research on sheep bodies.
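The statistical filtering stage mentioned above can be sketched in a few lines; the original pipeline uses PCL's C++ `StatisticalOutlierRemoval`, so this brute-force numpy version is only a simplified stand-in with assumed parameter values:

```python
import numpy as np

def statistical_outlier_filter(points, k=5, std_ratio=1.0):
    """PCL-style statistical outlier removal sketch: compute each point's
    mean distance to its k nearest neighbors, then discard points whose
    mean exceeds (global mean + std_ratio * global std)."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # skip self-distance (0)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return pts[mean_knn <= thresh]
```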

9.
Int J Mol Sci ; 25(17)2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39273227

ABSTRACT

Predicting protein-ligand binding sites is an integral part of structural biology and drug design. A comprehensive understanding of these binding sites is essential for advancing drug innovation, elucidating mechanisms of biological function, and exploring the nature of disease. However, accurately identifying protein-ligand binding sites remains a challenging task. To address this, we propose PGpocket, a geometric deep learning-based framework to improve protein-ligand binding site prediction. Initially, the protein surface is converted into a point cloud, and then the geometric and chemical properties of each point are calculated. Subsequently, the point cloud graph is constructed based on the inter-point distances, and the point cloud graph neural network (GNN) is applied to extract and analyze the protein surface information to predict potential binding sites. PGpocket is trained on the scPDB dataset, and its performance is verified on two independent test sets, Coach420 and HOLO4K. The results show that PGpocket achieves a 58% success rate on the Coach420 dataset and a 56% success rate on the HOLO4K dataset. These results surpass competing algorithms, demonstrating PGpocket's advancement and practicality for protein-ligand binding site prediction.
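The graph-construction step above (connecting surface points by inter-point distance before applying the GNN) can be sketched as a simple radius graph; the brute-force distance computation and the radius value are illustrative:

```python
import numpy as np

def radius_graph(points, radius):
    """Build an undirected graph over surface points: an edge (i, j)
    whenever the inter-point distance is below `radius`. Returned as
    index pairs with i < j; a GNN would consume these as edges."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    i_idx, j_idx = np.nonzero((d < radius) & (d > 0))
    return [(int(i), int(j)) for i, j in zip(i_idx, j_idx) if i < j]
```

Each node would additionally carry the geometric and chemical features computed for its surface point.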


Asunto(s)
Redes Neurales de la Computación , Proteínas , Sitios de Unión , Ligandos , Proteínas/química , Proteínas/metabolismo , Unión Proteica , Algoritmos , Aprendizaje Profundo , Bases de Datos de Proteínas
10.
Sensors (Basel) ; 24(17)2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39275701

ABSTRACT

In this study, a ring light point cloud calibration technique based on collimated laser beams is developed, aiming to reduce errors caused by the position and attitude changes of traditional ring light measurement devices. This article details the generation mechanism of the ring beam and the principle of deep hole measurement. It introduces the collimated beam as a reference, building on traditional ring light measurement devices, to achieve the synchronous acquisition of the ring beam and collimated spot images by an industrial camera. The Steger algorithm is employed to accurately extract the coordinates of the point cloud contours of both the ring beam and the collimated spot. By analyzing the shape and position changes of the collimated spot contour, the spatial position and attitude of the measuring device are precisely determined. This technique is applied to the 3D reconstruction of the inner surface of deep holes, ensuring the accurate restoration of the spatial positional attitude of the ring beam by incorporating the spatial positional attitude parameters of the measuring device to precisely calibrate the cross-sectional point cloud coordinates. Experimental results with ring gauges and deep hole workpieces demonstrate that this technique effectively reduces the percentage of point cloud data outside the tolerance range, and improves the accuracy of the 3D reconstruction model by 6.287%, thereby verifying the accuracy and practicality of this technique.

11.
Biomed Eng Lett ; 14(5): 1057-1068, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39220029

ABSTRACT

Conventional lung puncture surgery is a complex undertaking because the surgeon relies on visual assessment of respiratory conditions and manually executes the technique while the patient holds their breath. Failure to correctly perform the puncture technique can lead to negative outcomes, such as the development of sores and pneumothorax. In this work, we propose a novel approach for monitoring respiratory motion by utilizing defect-aware point cloud registration and descriptor computation. Through a thorough examination of the attributes of the inputs, we incorporate a defect detection branch into the registration network. Additionally, we developed two modules aimed at augmenting the quality of the extracted features. A coarse-to-fine respiratory phase recognition approach based on descriptor computation is devised for respiratory motion tracking. The efficacy of the proposed registration method is demonstrated through experiments conducted on both publicly accessible datasets and thoracoabdominal point cloud datasets. We obtained state-of-the-art registration results on the ModelNet40 dataset, with a rotation mean absolute error of 1.584° and a translation mean absolute error of 0.016 mm. The experimental findings on a thoracoabdominal point cloud dataset indicate that our method is effective and efficient, achieving a frame matching rate of 2 frames per second and a phase recognition accuracy of 96.3%. This allows identifying matching frames from template point clouds that display different parts of a patient's thoracoabdominal surface during regular breathing, distinguishing breathing stages and tracking breathing.

12.
Article in English | MEDLINE | ID: mdl-39235752

ABSTRACT

Shoe prints are one of the most common types of evidence found at crime scenes, second only to fingerprints. However, studies involving modern approaches such as machine learning and deep learning for the detection and analysis of shoe prints are quite limited in this field. With advancements in technology, positive results have recently emerged for the detection of 2D shoe prints; however, few studies have focused on 3D shoe prints. This study aims to use deep learning methods, specifically the PointNet architecture, for binary classification of 3D shoe prints from two different shoe brands. A 3D dataset created from 160 pairs of shoes was employed for this research, comprising 797 images of the Adidas brand and 2445 images of the Nike brand; the dataset includes worn shoe prints. According to the results obtained, the training phase achieved an accuracy of 96%, and the validation phase achieved an accuracy of 93%. These results are highly positive and indicate promising potential for classifying 3D shoe prints. This is described as the first classification study conducted using a deep learning method specifically on 3D shoe prints, and it provides proof of concept that deep learning research can be conducted on them. While the developed binary classification of 3D shoe prints may not fully meet current forensic needs, it will serve as a source of motivation for future research and for the creation of 3D datasets intended for forensic purposes.
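The property PointNet relies on for raw point sets like these scans is order invariance: a shared per-point transform followed by symmetric max pooling, so shuffling the points leaves the global feature unchanged. A minimal sketch, where W and b stand in for learned weights:

```python
import numpy as np

def pointnet_global_feature(points, W, b):
    """Core PointNet idea: a shared per-point layer followed by
    symmetric max pooling, making the global feature invariant to
    the ordering of the input points."""
    h = np.maximum(points @ W + b, 0.0)   # shared MLP layer + ReLU
    return h.max(axis=0)                  # order-invariant pooling
```

A classifier head on this global feature would then output the binary brand prediction.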

13.
Front Plant Sci ; 15: 1459968, 2024.
Article in English | MEDLINE | ID: mdl-39224846

ABSTRACT

Wheat exhibits complex characteristics during its growth, such as extensive tillering, slender and soft leaves, and severe organ cross-obscuration, posing a considerable challenge for full-cycle phenotypic monitoring. To address this, this study presents a synthesized method based on SFM-MVS (Structure-from-Motion, Multi-View Stereo) processing for handling and segmenting wheat point clouds, covering the entire growth cycle from the seedling to the grain filling stage. First, a multi-view image acquisition platform was constructed to capture image sequences of wheat plants, and dense point clouds were generated using SFM-MVS technology. High-quality dense point clouds were produced by implementing improved Euclidean clustering combined with centroids, color filtering, and statistical filtering methods. Subsequently, the segmentation of wheat plant stems and leaves was performed using the region growing segmentation algorithm. Although segmentation performance was suboptimal during the tillering, jointing, and booting stages due to the abundance of leaves and severe overlap, there was a marked improvement in wheat leaf segmentation efficiency over the entire growth cycle. Finally, phenotypic parameters were analyzed across different growth stages, comparing automated measurements of plant height, leaf length, and leaf width with actual measurements. The results demonstrated coefficients of determination (R²) of 0.9979, 0.9977, and 0.995; root mean square errors (RMSE) of 1.0773 cm, 0.2612 cm, and 0.0335 cm; and relative root mean square errors (RRMSE) of 2.1858%, 1.7483%, and 2.8462%, respectively. These results validate the reliability and accuracy of our proposed workflow in processing wheat point clouds and automatically extracting plant height, leaf length, and leaf width, indicating that our 3D reconstructed wheat model achieves high precision and can quickly, accurately, and non-destructively extract phenotypic parameters. Additionally, plant height, convex hull volume, plant surface area, and crown area were extracted, providing a detailed analysis of dynamic changes in wheat throughout its growth cycle. ANOVA was conducted across different cultivars, accurately revealing significant differences at various growth stages. This study proposes a convenient, rapid, and quantitative analysis method, offering crucial technical support for wheat plant phenotypic analysis and growth dynamics monitoring, applicable to precise full-cycle phenotypic monitoring of wheat.
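The goodness-of-fit statistics reported above (RMSE, RRMSE as a percentage of the measured mean, and R²) follow the usual definitions:

```python
import numpy as np

def fit_metrics(measured, predicted):
    """RMSE, relative RMSE (% of the measured mean), and R² between
    manual measurements and automated point-cloud-derived estimates."""
    y = np.asarray(measured, float)
    p = np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((y - p) ** 2))
    rrmse = 100.0 * rmse / y.mean()
    r2 = 1.0 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2)
    return rmse, rrmse, r2
```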

14.
Neural Netw ; 180: 106626, 2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39173197

ABSTRACT

Recently, point cloud domain adaptation (DA) practices have been implemented to improve the generalization ability of deep learning models on point cloud data. However, variations across domains often result in decreased performance of models trained on differently distributed data sources. Previous studies have focused on output-level domain alignment to address this challenge, but this approach may increase alignment errors, particularly for targets that would otherwise be predicted incorrectly. Therefore, in this study, we propose an input-level discretization-based matching to enhance the generalization ability of DA. Specifically, an efficient geometric deformation depth decoupling network (3DeNet) is implemented to learn knowledge from the source domain and embed it into an implicit feature space, which facilitates the effective constraint of unsupervised predictions for downstream tasks. Secondly, we demonstrate that the sparsity within the implicit feature space varies between domains, making domain differences difficult to bridge. Consequently, we match sets of neighboring points with different densities and biases by differentiating the adaptive densities. Finally, inter-domain differences are aligned by constraining the loss originating from and between the target domains. We conduct experiments on the point cloud DA datasets PointDA-10 and PointSegDA, achieving state-of-the-art results (improvements of over 1.2% and 1% on average, respectively).

15.
J Med Imaging Radiat Sci ; 55(4): 101729, 2024 Aug 10.
Article in English | MEDLINE | ID: mdl-39128321

ABSTRACT

PURPOSE: To construct a tumor motion monitoring model for stereotactic body radiation therapy (SBRT) of lung cancer from a feasibility perspective. METHODS: A total of 32 treatment plans from 22 patients were collected, whose planning CT and the centroid position of the planning target volume (PTV) were used as the reference. Images of different respiratory phases in 4DCT were acquired to redefine the targets and obtain the floating PTV centroid positions. In accordance with the planning CT and CBCT registration parameters, data augmentation was performed, yielding 2130 experimental recordings for analysis. We employed a stacking multi-learning ensemble approach to fit the 3D point cloud variations of the body surface and the change of target position to construct the tumor motion monitoring model, and the prediction accuracy was assessed using root mean squared error (RMSE) and R-squared (R²). RESULTS: The predicted displacement of the stacking ensemble model shows a high degree of agreement with the reference value in each direction. In the first layer of the model, the X direction (RMSE = 0.019-0.145 mm, R² = 0.9793-0.9996) and the Z direction (RMSE = 0.051-0.168 mm, R² = 0.9736-0.9976) show the best results, while the Y direction lagged behind (RMSE = 0.088-0.224 mm, R² = 0.9553-0.9933). The second-layer model combines the advantages of the first-layer unit models, obtaining RMSEs of 0.015 mm, 0.083 mm, and 0.041 mm and R² values of 0.9998, 0.9931, and 0.9984 for X, Y, and Z, respectively. CONCLUSIONS: The tumor motion monitoring method for SBRT of lung cancer is potentially non-ionizing, non-invasive, markerless, and real-time.
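The stacking idea (first-layer unit models whose outputs are refit by a second-layer combiner) can be sketched with two linear unit models on different feature subsets and a linear combiner; the feature split and the learners are illustrative, not the paper's actual models:

```python
import numpy as np

def stack_predict(X, y, X_new):
    """Two-layer stacking sketch: fit two linear regressors on
    different halves of the surface features (first layer), then fit
    a linear combiner on their outputs (second layer), and predict
    the target displacement for new samples."""
    half = X.shape[1] // 2

    def lin_fit(A, t):
        # least-squares linear model with intercept
        A1 = np.column_stack([A, np.ones(len(A))])
        w, *_ = np.linalg.lstsq(A1, t, rcond=None)
        return lambda B: np.column_stack([B, np.ones(len(B))]) @ w

    f1 = lin_fit(X[:, :half], y)          # first-layer unit models
    f2 = lin_fit(X[:, half:], y)
    Z = np.column_stack([f1(X[:, :half]), f2(X[:, half:])])
    g = lin_fit(Z, y)                     # second-layer combiner
    Z_new = np.column_stack([f1(X_new[:, :half]), f2(X_new[:, half:])])
    return g(Z_new)
```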

16.
Technol Cancer Res Treat ; 23: 15330338241273149, 2024.
Article in English | MEDLINE | ID: mdl-39155658

ABSTRACT

Objectives: Some tumor localization methods in radiotherapy have poor real-time performance and may generate additional radiation. We propose a multimodal point cloud-based method for tumor localization in robotic ultrasound-guided radiotherapy, which uses computed tomography (CT) only during radiotherapy planning to avoid additional radiation. Methods: The tumor position was determined using the CT point cloud, and the red green blue depth (RGBD) point cloud was used to determine the body surface scanning location corresponding to the tumor location. The relationship between the CT point cloud and the RGBD point cloud was established through multimodal point cloud registration. The point cloud was then used for robotic tumor localization through the coordinate transformation between camera and robot. Results: The maximum mean absolute errors of the tumor location in the X, Y, and Z directions of the robot coordinate system were 0.781, 1.334, and 1.490 mm, respectively. The average point-to-point translation mean absolute error between the actual and predicted positions of the localization points was 1.847 mm. The maximum error in the random positioning experiment was 1.77 mm. Conclusion: The proposed method is radiation free and has real-time performance, with tumor localization accuracy that meets the requirements of radiotherapy. By potentially reducing the risks associated with radiation exposure while ensuring efficient and accurate tumor localization, it represents a promising advancement in the field of radiotherapy.
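The camera-to-robot step is a standard homogeneous-transform application; the 4x4 matrix would come from hand-eye calibration (a generic sketch, not the paper's calibration procedure):

```python
import numpy as np

def camera_to_robot(p_cam, T_cam2robot):
    """Map a point from camera coordinates to robot coordinates using a
    4x4 homogeneous transform (rotation in the top-left 3x3 block,
    translation in the last column)."""
    p = np.append(np.asarray(p_cam, float), 1.0)   # homogeneous coords
    return (T_cam2robot @ p)[:3]
```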


Asunto(s)
Neoplasias , Planificación de la Radioterapia Asistida por Computador , Radioterapia Guiada por Imagen , Tomografía Computarizada por Rayos X , Humanos , Radioterapia Guiada por Imagen/métodos , Neoplasias/radioterapia , Neoplasias/diagnóstico por imagen , Planificación de la Radioterapia Asistida por Computador/métodos , Tomografía Computarizada por Rayos X/métodos , Ultrasonografía/métodos , Algoritmos , Fantasmas de Imagen , Robótica/métodos , Procedimientos Quirúrgicos Robotizados/métodos
17.
Sensors (Basel) ; 24(15)2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39124021

ABSTRACT

LiDAR offers a wide range of uses in autonomous driving, remote sensing, urban planning, and other areas. The laser 3D point clouds acquired by LiDAR typically encounter issues during registration, including laser speckle noise, Gaussian noise, data loss, and data disorder. This work proposes a novel Student's t-distribution point cloud registration algorithm based on the local features of point clouds to address these issues. The approach uses a Student's t-distribution mixture model (SMM) to generate the probability distribution of point cloud registration, which can accurately describe the data distribution, in order to tackle the problems of missing laser 3D point cloud data and data disorder. Owing to the disparity among point cloud registration tasks, a full-rank covariance matrix is built from the local features of the point cloud during the design of the objective function, and a combined point-to-point and point-to-plane distance penalty is added to the objective function adaptively. Simultaneously, by analyzing the imaging characteristics of LiDAR, and according to the influence of the laser waveform and detector on LiDAR imaging, a composite weight coefficient is added to improve the pertinence of the algorithm. On a public dataset and a laser 3D point cloud dataset acquired in the laboratory, the experimental findings demonstrate that the proposed algorithm has high practicality and reliability and outperforms the five comparison algorithms in terms of accuracy and robustness.

18.
Plants (Basel) ; 13(16)2024 Aug 18.
Article in English | MEDLINE | ID: mdl-39204736

ABSTRACT

Accurate segmentation of the stems of pumpkin seedlings has a great influence on the modernization of pumpkin cultivation and can provide detailed data support for the growth of pumpkin plants. We collected and constructed a pumpkin seedling point cloud dataset for the first time. The potting soil and wall background in the point cloud data often interfere with the accuracy of stem segmentation, the stem shape of pumpkin seedlings varies with environmental factors during the growing stage, and the stem is closely connected with the potting soil and leaves, so the stem boundary is easily blurred. These problems make accurate segmentation of pumpkin seedling point cloud stems challenging. In this paper, an accurate segmentation algorithm for pumpkin seedling point cloud stems based on CPHNet is proposed. First, a channel residual attention multilayer perceptron (CRA-MLP) module is proposed, which suppresses background interference such as soil. Second, a position-enhanced self-attention (PESA) mechanism is proposed, enabling the model to adapt to the diverse morphologies of pumpkin seedling stems. Finally, a hybrid loss function of cross-entropy loss and dice loss (HCE-Dice Loss) is proposed to address the issue of fuzzy stem boundaries. The experimental results show that CPHNet achieves a mean Intersection over Union (mIoU) of 90.4%, a mean precision (mP) of 93.1%, a mean recall (mR) of 95.6%, a mean F1 score (mF1) of 94.4%, and a speed of 0.03 plants/second on the self-built dataset. Compared with other popular segmentation models, this model segments the stem part of the pumpkin seedling point cloud more accurately and stably.

19.
Sensors (Basel) ; 24(16)2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39204801

ABSTRACT

Grain is a common bulk cargo. To ensure optimal utilization of transportation space and prevent overflow accidents, it is necessary to observe the grain's shape and determine the loading status during the loading process. Traditional methods often rely on manual judgment, which results in high labor intensity, poor safety, and low loading efficiency. Therefore, this paper proposes a method for recognizing the bulk grain-loading status based on Light Detection and Ranging (LiDAR). The method uses LiDAR to obtain point cloud data and constructs a deep learning network to perform target recognition and component segmentation on loading vehicles, extract vehicle positions and grain shapes, and recognize and report the bulk grain-loading status. On the measured point cloud data of bulk grain loading, the overall accuracy in the point cloud-classification task is 97.9% and the mean accuracy is 98.1%; in the vehicle component-segmentation task, the overall accuracy is 99.1% and the mean Intersection over Union is 96.6%. The results indicate that the method performs reliably in extracting vehicle positions, detecting grain shapes, and recognizing the loading status.

20.
Sensors (Basel) ; 24(16)2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39205049

ABSTRACT

Robots need to sense information about the external environment before moving, which helps them to recognize and understand their surroundings so that they can plan safe and effective paths and avoid obstacles. Conventional algorithms using a single sensor cannot obtain enough information and lack real-time capabilities. To solve these problems, we propose an information perception algorithm with vision as the core and the fusion of LiDAR. Regarding vision, we propose the YOLO-SCG model, which is able to detect objects faster and more accurately. When processing point clouds, we integrate the detection results of vision for local clustering, improving both the processing speed of the point cloud and the detection effectiveness. Experiments verify that our proposed YOLO-SCG algorithm improves accuracy by 4.06% and detection speed by 7.81% compared to YOLOv9, and our algorithm excels in distinguishing different objects in the clustering of point clouds.
