Results 1 - 20 of 42
1.
Adv Sci (Weinh) ; 11(28): e2308886, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38725135

ABSTRACT

Efficiently generating 3D holograms is one of the most challenging research topics in the field of holography. This work introduces a method for generating multi-depth phase-only holograms using a fully convolutional neural network (FCN). The method primarily involves a forward-backward-diffraction framework to compute multi-depth diffraction fields, along with a layer-by-layer replacement method (L2RM) to handle occlusion relationships. The diffraction fields computed by the former are fed into the carefully designed FCN, which leverages its powerful non-linear fitting capability to generate multi-depth holograms of 3D scenes. The latter can smooth the boundaries of different layers in scene reconstruction by complementing information of occluded objects, thus enhancing the reconstruction quality of holograms. The proposed method can generate a multi-depth 3D hologram with a PSNR of 31.8 dB in just 90 ms for a resolution of 2160 × 3840 on the NVIDIA Tesla A100 40G tensor core GPU. Additionally, numerical and experimental results indicate that the generated holograms accurately reconstruct clear 3D scenes with correct occlusion relationships and provide excellent depth focusing.
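
For readers unfamiliar with layer-based hologram computation, below is a minimal NumPy sketch of angular-spectrum propagation, the scalar-diffraction step that a forward-backward-diffraction framework of this kind relies on; the function name, wavelength, sampling pitch, and toy layers are illustrative assumptions, not the authors' code.

import numpy as np

def angular_spectrum_propagate(field, wavelength, dz, dx):
    # field: complex 2D array sampled at pitch dx; propagate by distance dz (negative = back-propagation)
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components are suppressed
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy multi-depth scene: propagate each depth layer back to the hologram plane and sum.
layers = [np.exp(1j * np.random.rand(256, 256)) for _ in range(3)]   # stand-in layer fields
depths = [0.05, 0.06, 0.07]                                          # metres (illustrative)
holo_field = sum(angular_spectrum_propagate(l, 532e-9, -z, 8e-6)
                 for l, z in zip(layers, depths))
phase_only_hologram = np.angle(holo_field)   # naive phase-only encoding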

2.
Front Robot AI ; 11: 1359887, 2024.
Article in English | MEDLINE | ID: mdl-38680621

ABSTRACT

Autonomous navigation in agricultural fields presents a unique challenge due to the unpredictable outdoor environment. Various approaches have been explored to tackle this task, each with its own set of challenges. These include GPS guidance, which faces availability issues and struggles to avoid obstacles, and vision guidance techniques, which are sensitive to changes in light, weeds, and crop growth. This study proposes that combining GPS and visual navigation offers an optimal solution for autonomous navigation in agricultural fields. Three solutions for autonomous navigation in cotton fields were developed and evaluated. The first solution utilized a path tracking algorithm, Pure Pursuit, to follow GPS coordinates and guide a mobile robot. It achieved an average lateral deviation of 8.3 cm from the pre-recorded path. The second solution employed a deep learning model, specifically a fully convolutional neural network for semantic segmentation, to detect paths between cotton rows. The mobile rover then navigated using the Dynamic Window Approach (DWA) path planning algorithm, achieving an average lateral deviation of 4.8 cm from the desired path. Finally, the two solutions were integrated for a more practical approach. GPS served as a global planner to map the field, while the deep learning model and DWA acted as a local planner for navigation and real-time decision-making. This integrated solution enabled the robot to navigate between cotton rows with an average lateral distance error of 9.5 cm, offering a more practical method for autonomous navigation in cotton fields.
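
As an illustration of the GPS-following stage, here is a minimal sketch of the classic Pure Pursuit steering law; the lookahead distance, speed, and waypoints are placeholders, not values from the study.

import math

def pure_pursuit_curvature(pose, path, lookahead=1.0):
    """pose = (x, y, heading); path = list of (x, y) waypoints (e.g. recorded GPS points, in metres)."""
    x, y, yaw = pose
    # Pick the first waypoint at least `lookahead` metres away from the robot.
    target = None
    for px, py in path:
        if math.hypot(px - x, py - y) >= lookahead:
            target = (px, py)
            break
    if target is None:
        target = path[-1]
    # Transform the target into the robot frame.
    dx, dy = target[0] - x, target[1] - y
    y_local = -math.sin(yaw) * dx + math.cos(yaw) * dy
    # Pure Pursuit: curvature of the arc that passes through the lookahead point.
    return 2.0 * y_local / (lookahead ** 2)

# Example: curvature -> angular velocity command for a differential-drive rover.
kappa = pure_pursuit_curvature((0.0, 0.0, 0.0), [(1.0, 0.2), (2.0, 0.5)], lookahead=1.0)
v = 0.5                     # forward speed, m/s (illustrative)
omega = v * kappa           # angular velocity command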

3.
J Bone Oncol ; 45: 100593, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38495379

ABSTRACT

Background and objective: Pelvic bone tumors represent a harmful orthopedic condition, encompassing both benign and malignant forms. Addressing the issue of limited accuracy in current machine learning algorithms for bone tumor image segmentation, we have developed an enhanced bone tumor image segmentation algorithm. This algorithm is built upon an improved fully convolutional neural network, incorporating both a fully convolutional neural network (FCNN-4s) and a conditional random field (CRF) to achieve more precise segmentation. Methodology: The enhanced fully convolutional neural network (FCNN-4s) was employed to conduct initial segmentation on preprocessed images. Following each convolutional layer, batch normalization layers were introduced to expedite network training convergence and enhance the accuracy of the trained model. Subsequently, a fully connected conditional random field (CRF) was integrated to fine-tune the segmentation results, refining the boundaries of pelvic bone tumors and achieving high-quality segmentation. Results: The experimental outcomes demonstrate a significant enhancement in segmentation accuracy and stability compared to the conventional convolutional neural network bone tumor image segmentation algorithm. The algorithm achieves an average Dice coefficient of 93.31%, indicating superior performance in real-time operations. Conclusion: In contrast to the conventional convolutional neural network segmentation algorithm, the algorithm presented in this paper has a more refined structure, proficiently addressing the issues of over-segmentation and under-segmentation in pelvic bone tumor segmentation. This segmentation model exhibits superior real-time performance and robust stability, and is capable of achieving heightened segmentation accuracy.
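
The FCNN-4s architecture itself is not detailed in the abstract, but the "batch normalization after every convolution" idea can be sketched in PyTorch as follows; the layer widths and the toy fully convolutional head are illustrative assumptions.

import torch
import torch.nn as nn

class ConvBNBlock(nn.Module):
    """Convolution followed by batch normalization and ReLU, as in the initial-segmentation stage."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),       # speeds up convergence and stabilizes training
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

# Toy fully convolutional head: downsample once, then upsample back to the input resolution.
model = nn.Sequential(
    ConvBNBlock(1, 32),
    nn.MaxPool2d(2),
    ConvBNBlock(32, 64),
    nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
    nn.Conv2d(64, 2, kernel_size=1),      # 2 classes: tumor / background
)
coarse_logits = model(torch.randn(1, 1, 128, 128))   # -> (1, 2, 128, 128)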

4.
Heliyon ; 10(3): e25030, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38318024

ABSTRACT

Objective: This study trains a U-shaped fully convolutional neural network (U-Net) model based on peripheral contour measures to achieve rapid, accurate, automated identification and segmentation of periprostatic adipose tissue (PPAT). Methods: Currently, no studies are using deep learning methods to discriminate and segment periprostatic adipose tissue. This paper proposes a novel, modified U-shaped convolutional neural network that learns contour control points from a small dataset of MRI T2W images of PPAT combined with their gradient images, a feature learning strategy intended to reduce the feature ambiguity caused by differences in PPAT contours between patients. This paper adopts a supervised learning method on the labeled dataset, combines the probability and spatial distribution of control points, and proposes a weighted loss function to optimize the neural network's convergence speed and detection performance. Based on high-precision detection of control points, this paper uses convex curve fitting to obtain the final PPAT contour. The image segmentation results were compared with those of a fully convolutional network (FCN), U-Net, and a semantic segmentation convolutional network (SegNet) on three evaluation metrics: Dice similarity coefficient (DSC), Hausdorff distance (HD), and intersection over union (IoU). Results: Cropped images with a 270 × 270-pixel matrix had DSC, HD, and IoU values of 70.1%, 27 mm, and 56.1%, respectively; downscaled images with a 256 × 256-pixel matrix had values of 68.7%, 26.7 mm, and 54.1%. A U-Net network based on peripheral contour characteristics predicted the complete periprostatic adipose tissue contours on T2W images at different levels, whereas FCN, U-Net, and SegNet could not completely predict them. Conclusion: This U-Net convolutional neural network based on peripheral contour features can identify and segment periprostatic adipose tissue quite well. Cropped images with a 270 × 270-pixel matrix are more appropriate for use with the U-Net convolutional neural network based on contour features; reducing the resolution of the original image lowers the accuracy of the U-Net convolutional neural network. FCN and SegNet are not appropriate for identifying PPAT on T2-sequence MR images. Our method can automatically segment PPAT rapidly and accurately, laying a foundation for PPAT image analysis.
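
The exact weighting scheme of the proposed loss is not given in the abstract; the following is a generic sketch, assuming a heatmap-style control-point target, of how sparse control-point pixels might be up-weighted in a binary cross-entropy loss. The positive weight and tensor shapes are illustrative.

import torch
import torch.nn.functional as F

def weighted_control_point_loss(pred_logits, target_heatmap, pos_weight=20.0):
    """Weighted binary cross-entropy: pixels marked as contour control points are
    up-weighted so the sparse positives are not swamped by background (illustrative)."""
    weights = torch.where(target_heatmap > 0.5,
                          torch.full_like(target_heatmap, pos_weight),
                          torch.ones_like(target_heatmap))
    return F.binary_cross_entropy_with_logits(pred_logits, target_heatmap, weight=weights)

pred = torch.randn(1, 1, 256, 256)                       # network output (logits)
target = torch.zeros(1, 1, 256, 256)
target[0, 0, 100, 120] = 1.0                             # one annotated control point
loss = weighted_control_point_loss(pred, target)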

5.
BMC Med Imaging ; 23(1): 124, 2023 09 12.
Article in English | MEDLINE | ID: mdl-37700250

ABSTRACT

BACKGROUND: Brain extraction is an essential prerequisite for the automated diagnosis of intracranial lesions and determines, to a certain extent, the accuracy of subsequent lesion recognition, location, and segmentation. Segmentation using a fully convolutional neural network (FCN) yields high accuracy but a relatively slow extraction speed. METHODS: This paper proposes an integrated algorithm, FABEM, to address the above issues. This method first uses threshold segmentation, morphological closing, a convolutional neural network (CNN), and image filling to generate an initial mask. Then, it detects the number of connected regions in the mask. If the number of connected regions equals 1, the extraction is done by directly multiplying the mask with the original image. Otherwise, the mask is further segmented: the region-growing method is used for original images with a single-region brain distribution, whereas Deeplabv3+ is used to adjust the mask for images with a multi-region brain distribution. Finally, the mask is multiplied with the original image to complete the extraction. RESULTS: The algorithm and 5 FCN models were tested on 24 datasets containing different lesions, and the algorithm's performance showed MPA = 0.9968, MIoU = 0.9936, and MBF = 0.9963, comparable to Deeplabv3+. Still, its extraction speed is much faster than that of Deeplabv3+: it can complete the brain extraction of a head CT image in about 0.43 s, roughly 3.8 times faster than Deeplabv3+. CONCLUSION: Thus, this method can achieve accurate brain extraction from head CT images faster, creating a good basis for subsequent brain volume measurement and feature extraction of intracranial lesions.


Subject(s)
Algorithms , Brain , Humans , Brain/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed
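
A rough OpenCV sketch of the mask-generation and branching logic described in the entry above (threshold, closing, connected-region count, then multiplication with the original image); the threshold value, kernel size, and the omitted CNN / region-growing / Deeplabv3+ steps are illustrative simplifications, not the authors' implementation.

import cv2
import numpy as np

def initial_brain_mask(ct_slice):
    """Rough brain mask: threshold, morphological closing, then branch on how many
    connected regions remain (thresholds here are illustrative, not the paper's)."""
    _, mask = cv2.threshold(ct_slice, 40, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    num_regions, _ = cv2.connectedComponents(mask)
    num_regions -= 1                      # discount the background label
    if num_regions == 1:
        return mask                       # single region: use the mask directly
    # Otherwise the paper branches: region growing for single-region brains,
    # Deeplabv3+ refinement for multi-region brains (not reproduced here).
    return mask

ct = np.random.randint(0, 120, (512, 512), dtype=np.uint8)    # stand-in CT slice
brain = cv2.bitwise_and(ct, ct, mask=initial_brain_mask(ct))  # multiply mask with image
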
6.
J Bone Oncol ; 42: 100502, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37736418

ABSTRACT

Background and objective: Bone tumors are a harmful orthopedic disease that can be benign or malignant. To address the limited accuracy of existing machine learning algorithms for bone tumor image segmentation, a bone tumor image segmentation algorithm based on an improved fully convolutional neural network is proposed, consisting of a fully convolutional neural network (FCNN-4s) and a conditional random field (CRF). Methodology: The improved fully convolutional neural network (FCNN-4s) was used to perform coarse segmentation on preprocessed images. Batch normalization layers were added after each convolutional layer to accelerate the convergence of network training and improve the accuracy of the trained model. Then, a fully connected conditional random field (CRF) was fused to refine the bone tumor boundary in the coarse segmentation results, achieving fine segmentation. Results: The experimental results show that, compared with the traditional convolutional neural network bone tumor image segmentation algorithm, the proposed algorithm greatly improves segmentation accuracy and stability: the average Dice coefficient reaches 91.56%, and real-time performance is better. Conclusion: Compared with the traditional convolutional neural network segmentation algorithm, the algorithm in this paper has a more refined structure, which can effectively solve the problems of over-segmentation and under-segmentation of bone tumors. The segmentation prediction has better real-time performance and strong stability, and can achieve higher segmentation accuracy.
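
The fully connected CRF refinement step can be illustrated with the commonly used pydensecrf package; this is a generic sketch, not the authors' implementation, and the pairwise-potential parameters are placeholders.

import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(softmax_probs, rgb_image, iters=5):
    """Refine coarse FCN probabilities with a fully connected CRF.
    softmax_probs: (n_classes, H, W) float32; rgb_image: (H, W, 3) uint8."""
    n_classes, h, w = softmax_probs.shape
    d = dcrf.DenseCRF2D(w, h, n_classes)
    d.setUnaryEnergy(unary_from_softmax(softmax_probs))
    d.addPairwiseGaussian(sxy=3, compat=3)                               # smoothness term
    d.addPairwiseBilateral(sxy=60, srgb=10, rgbim=rgb_image, compat=5)   # appearance term
    q = np.array(d.inference(iters)).reshape(n_classes, h, w)
    return q.argmax(axis=0)                                              # refined label map

probs = np.random.dirichlet([1, 1], size=(64, 64)).transpose(2, 0, 1).astype(np.float32)
labels = crf_refine(np.ascontiguousarray(probs), np.zeros((64, 64, 3), dtype=np.uint8))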

7.
3D Print Addit Manuf ; 10(4): 723-731, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37609591

ABSTRACT

Laser welding quality forecasting is highly significant for the laser manufacturing process. However, the difficulty of extracting the dynamic characteristics of the molten pool during the short laser welding process makes predicting the welding quality in real time difficult. Accordingly, this study proposes a multimodel quality forecast (MMQF) method based on dynamic geometric features of the molten pool to forecast the welding quality in real time. To extract the geometric features of the molten pool, an improved fully convolutional neural network is proposed to segment the dynamic molten pool images collected during the entire welding process. In addition, several dynamic geometric features of the molten pool are extracted by using the minimum enclosing rectangle algorithm, and their performance is evaluated with several statistical indexes. To forecast the welding quality, a nonlinear quadratic-kernel logistic regression model is proposed that maps the linearly inseparable features to a high-dimensional space. Experimental results show that the MMQF method can make an effective and stable forecast of welding quality. It performs well with small amounts of data and can satisfy the requirement of real-time forecasting.
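
A small sketch of the two stages described above: geometric features from the minimum enclosing rectangle of a segmented molten-pool mask (via OpenCV), and a quadratic feature map feeding logistic regression, which is one explicit way to realize a quadratic-kernel classifier. The feature list, data, and labels are stand-ins, not the authors' exact model.

import numpy as np
import cv2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

def molten_pool_features(mask):
    """Geometric features of a segmented molten pool via the minimum enclosing rectangle."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    (cx, cy), (w, h), angle = cv2.minAreaRect(max(contours, key=cv2.contourArea))
    return [w, h, w * h, w / (h + 1e-6), angle]

mask = np.zeros((64, 64), dtype=np.uint8)
cv2.rectangle(mask, (10, 20), (40, 35), 255, -1)     # stand-in molten-pool mask
features = molten_pool_features(mask)

# Explicit quadratic feature map + logistic regression (illustrative classifier).
clf = make_pipeline(StandardScaler(), PolynomialFeatures(degree=2), LogisticRegression(max_iter=1000))
X = np.random.rand(40, 5)            # per-weld feature vectors (stand-in data)
y = np.random.randint(0, 2, 40)      # weld quality labels: 0 = defect, 1 = good
clf.fit(X, y)
print(clf.predict(X[:3]))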

8.
Comput Biol Med ; 161: 107021, 2023 07.
Article in English | MEDLINE | ID: mdl-37216775

ABSTRACT

Magnetic resonance imaging is a fundamental tool for reaching a diagnosis of multiple sclerosis and monitoring its progression. Although several attempts have been made to segment multiple sclerosis lesions using artificial intelligence, fully automated analysis is not yet available. State-of-the-art methods rely on slight variations of standard segmentation architectures (e.g., U-Net). However, recent research has demonstrated how exploiting temporal-aware features and attention mechanisms can provide a significant boost to traditional architectures. This paper proposes a framework that exploits an augmented U-Net architecture with a convolutional long short-term memory layer and an attention mechanism to segment and quantify multiple sclerosis lesions detected in magnetic resonance images. Quantitative and qualitative evaluation on challenging examples demonstrated how the method outperforms previous state-of-the-art approaches, reporting an overall Dice score of 89% and demonstrating robustness and generalization ability on previously unseen test samples from a new, dedicated dataset still under construction.


Subject(s)
Multiple Sclerosis , Neural Networks, Computer , Humans , Artificial Intelligence , Multiple Sclerosis/diagnostic imaging , Multiple Sclerosis/pathology , Magnetic Resonance Imaging/methods , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
9.
Sensors (Basel) ; 23(1)2023 Jan 02.
Article in English | MEDLINE | ID: mdl-36617106

ABSTRACT

A key element in an automated visual inspection system for concrete structures is identifying the geometric properties of surface defects such as cracks. Fully convolutional neural networks (FCNs) have been demonstrated to be powerful tools for crack segmentation in inspection images. However, the performance of FCNs depends on the size of the dataset that they are trained with. In the absence of large datasets of labeled images for concrete crack segmentation, these networks may lose their excellent prediction accuracy when tested on a new target dataset with different image conditions. In this study, firstly, a Transfer Learning approach is developed to enable the networks to better distinguish cracks from background pixels. A synthetic dataset is generated and utilized to fine-tune a U-Net that is pre-trained with a public dataset. In the proposed data synthesis approach, which is based on CutMix data augmentation, the crack images from the public dataset are combined with the background images of a potential target dataset. Secondly, since cracks propagate over time, a novel temporal data fusion technique is proposed for sequential images of concrete surfaces. In this technique, the network's predictions from multiple time steps are aggregated to improve the recall of predictions. It is shown that applying the proposed improvements increased the F1-score and mIoU by 28.4% and 22.2%, respectively, a significant enhancement in the performance of the segmentation network.


Asunto(s)
Procesamiento de Imagen Asistido por Computador , Redes Neurales de la Computación , Procesamiento de Imagen Asistido por Computador/métodos
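
A minimal NumPy sketch of the CutMix-style synthesis described in the entry above, pasting a rectangular patch of a labeled crack image onto a target-domain background and cutting the label mask the same way; the patch size and placement rule are illustrative assumptions.

import numpy as np

def cutmix_crack(crack_img, crack_mask, background_img, rng=np.random):
    """Paste a random rectangular patch of a labeled crack image onto a background
    image from the target domain; the label mask is cut identically (CutMix-style)."""
    h, w = background_img.shape[:2]
    ph, pw = h // 2, w // 2                          # patch size (illustrative)
    y = rng.randint(0, h - ph)
    x = rng.randint(0, w - pw)
    out_img = background_img.copy()
    out_mask = np.zeros((h, w), dtype=crack_mask.dtype)
    out_img[y:y + ph, x:x + pw] = crack_img[:ph, :pw]
    out_mask[y:y + ph, x:x + pw] = crack_mask[:ph, :pw]
    return out_img, out_mask

crack = np.random.rand(128, 128)
mask = (crack > 0.95).astype(np.uint8)               # stand-in crack label
bg = np.random.rand(128, 128)                        # stand-in target-domain background
synthetic_img, synthetic_mask = cutmix_crack(crack, mask, bg)
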
10.
Cereb Cortex ; 33(4): 933-947, 2023 02 07.
Article in English | MEDLINE | ID: mdl-35332916

ABSTRACT

Recently, the functional roles of the human cortical folding patterns have attracted increasing interest in the neuroimaging community. However, most existing studies have focused on the gyro-sulcal functional relationship on a whole-brain scale and have possibly overlooked the localized and subtle functional differences of brain networks. Accumulating evidence suggests that functional brain networks are the basic unit by which the brain realizes its function; thus, the functional relationships between gyri and sulci still need to be further explored within different functional brain networks. Inspired by this evidence, we proposed a novel intrinsic connectivity network (ICN)-guided pooling-trimmed convolutional neural network (I-ptFCN) to revisit the functional difference between gyri and sulci. By testing the proposed model on the task functional magnetic resonance imaging (fMRI) datasets of the Human Connectome Project, we found that the classification accuracy of gyral and sulcal fMRI signals varied significantly across different ICNs, indicating functional heterogeneity of cortical folding patterns in different brain networks. This heterogeneity may be driven by sulci, as only sulcal signals show heterogeneous frequency features across different ICNs, whereas the frequency features of gyri are homogeneous. These results offer novel insights into the functional difference between gyri and sulci and shed light on the functional roles of cortical folding patterns.


Asunto(s)
Corteza Cerebral , Conectoma , Humanos , Corteza Cerebral/diagnóstico por imagen , Conectoma/métodos , Imagen por Resonancia Magnética/métodos , Encéfalo/diagnóstico por imagen , Redes Neurales de la Computación
11.
Sensors (Basel) ; 22(24)2022 Dec 10.
Article in English | MEDLINE | ID: mdl-36560049

ABSTRACT

Path planning plays an important role in navigation and motion planning for robotics and automated driving applications. Most existing methods use iterative frameworks to calculate and plan the optimal path from the starting point to the endpoint, and such iterative planning algorithms can be slow on large maps or long paths. This work introduces an end-to-end path-planning algorithm based on a fully convolutional neural network (FCNN) for grid maps with a traversability cost, and trains a general path-planning model for 10 × 10 to 80 × 80 square and rectangular maps. The algorithm outputs both the lowest-cost path, which takes the traversability cost into account, and the shortest path, which ignores it. The FCNN model analyzes the grid map information and outputs two probability maps, which give the probability of each point lying on the lowest-cost path and on the shortest path, respectively. Based on the probability maps, the actual optimal path is reconstructed by using the highest-probability method. The proposed method has a clear speed advantage over traditional algorithms. On test maps of different sizes and shapes, for the lowest-cost path and the shortest path, the average optimal rates were 72.7% and 78.2%, the average success rates were 95.1% and 92.5%, and the average length rates were 1.04 and 1.03, respectively.


Asunto(s)
Vehículos Autónomos , Robótica , Algoritmos , Redes Neurales de la Computación , Robótica/métodos , Movimiento (Física)
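
The "highest probability" reconstruction step in the entry above can be illustrated with a greedy walk over the predicted probability map; the exact reconstruction rule used in the paper may differ, so treat this as a sketch with a toy map.

import numpy as np

def reconstruct_path(prob_map, start, goal, max_steps=10000):
    """Greedy reconstruction: from `start`, repeatedly move to the unvisited
    4-neighbour with the highest predicted probability until `goal` is reached."""
    h, w = prob_map.shape
    path, visited, cur = [start], {start}, start
    for _ in range(max_steps):
        if cur == goal:
            return path
        r, c = cur
        neighbours = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= r + dr < h and 0 <= c + dc < w and (r + dr, c + dc) not in visited]
        if not neighbours:
            return None                          # dead end: reconstruction failed
        cur = max(neighbours, key=lambda p: prob_map[p])
        visited.add(cur)
        path.append(cur)
    return None

prob = np.random.rand(10, 10) * 0.5
prob[0, :] = 0.9                                 # toy probability map from the FCNN
print(reconstruct_path(prob, (0, 0), (0, 9)))
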
12.
Med Image Anal ; 81: 102534, 2022 10.
Article in English | MEDLINE | ID: mdl-35842977

ABSTRACT

Diabetic retinopathy (DR) is one of the most important complications of diabetes. Accurate segmentation of DR lesions is of great importance for the early diagnosis of DR. However, simultaneous segmentation of multi-type DR lesions is technically challenging because of 1) the lack of pixel-level annotations and 2) the large diversity between different types of DR lesions. In this study, first, we propose a novel Poisson-blending data augmentation (PBDA) algorithm to generate synthetic images, which can be easily utilized to expand the existing training data for lesion segmentation. We perform extensive experiments to recognize the important attributes in the PBDA algorithm. We show that position constraints are of great importance and that the synthesis density of one type of lesion has a joint influence on the segmentation of other types of lesions. Second, we propose a convolutional neural network architecture, named DSR-U-Net++ (i.e., DC-SC residual U-Net++), for the simultaneous segmentation of multi-type DR lesions. Ablation studies showed that the mean area under precision recall curve (AUPR) for all four types of lesions increased by >5% with PBDA. The proposed DSR-U-Net++ with PBDA outperformed the state-of-the-art methods by 1.7%-9.9% on the Indian Diabetic Retinopathy Image Dataset (IDRiD) and 67.3% on the e-ophtha dataset with respect to mean AUPR. The developed method would be an efficient tool to generate large-scale task-specific training data for other medical anomaly segmentation tasks.


Asunto(s)
Retinopatía Diabética , Algoritmos , Retinopatía Diabética/diagnóstico por imagen , Humanos , Procesamiento de Imagen Asistido por Computador/métodos , Redes Neurales de la Computación
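
Poisson-blending augmentation of the kind described in the entry above can be sketched with OpenCV's seamless cloning; the position constraints discussed in the paper are not reproduced here, and the images are synthetic stand-ins.

import cv2
import numpy as np

def poisson_blend_lesion(lesion_patch, lesion_mask, fundus_img, center):
    """Blend a lesion patch into a fundus image with Poisson (seamless) cloning.
    `center` is the (x, y) position of the patch in the destination image."""
    return cv2.seamlessClone(lesion_patch, fundus_img, lesion_mask, center, cv2.NORMAL_CLONE)

fundus = np.full((256, 256, 3), 120, dtype=np.uint8)        # stand-in fundus image
patch = np.full((40, 40, 3), 200, dtype=np.uint8)           # stand-in lesion patch
mask = np.full((40, 40), 255, dtype=np.uint8)               # blend the whole patch
augmented = poisson_blend_lesion(patch, mask, fundus, (128, 128))
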
13.
PeerJ Comput Sci ; 8: e895, 2022.
Article in English | MEDLINE | ID: mdl-35494812

ABSTRACT

This research enhances crowd analysis by focusing on excessive crowd analysis and crowd density predictions for the Hajj and Umrah pilgrimages. Crowd analysis usually estimates the number of objects within an image or a video frame and is regularly solved by estimating the density generated from object location annotations. However, it suffers from low accuracy when the crowd is far away from the surveillance camera. This research proposes an approach to overcome the problem of estimating crowd density from footage taken by a distant surveillance camera. The proposed approach employs a fully convolutional neural network (FCNN)-based method for crowd analysis, especially for the classification of crowd density. This study aims to address the current technological challenges of video analysis in a scenario involving the movement of large numbers of pilgrims at densities ranging between 7 and 8 people per square meter. To address this challenge, this study develops a new dataset based on the Hajj pilgrimage scenario. To validate the proposed method, the proposed model is compared with existing models using existing datasets. The proposed FCNN-based method achieved a final accuracy of 100%, 98%, and 98.16% on the proposed dataset, the UCSD dataset, and the JHU-CROWD dataset, respectively. Additionally, the ResNet-based method obtained final accuracies of 97%, 89%, and 97% for the proposed dataset, UCSD dataset, and JHU-CROWD dataset, respectively. The proposed Hajj-Crowd-2021 crowd analysis dataset and the model outperformed the other state-of-the-art datasets and models in most cases.

14.
Brief Bioinform ; 23(2)2022 03 10.
Article in English | MEDLINE | ID: mdl-35212357

ABSTRACT

Structural information for chemical compounds is often described by pictorial images in most scientific documents, which cannot be easily understood and manipulated by computers. This dilemma makes optical chemical structure recognition (OCSR) an essential tool for automatically mining knowledge from an enormous amount of literature. However, existing OCSR methods fall far short of our expectations for realistic requirements due to their poor recovery accuracy. In this paper, we developed a deep neural network model named ABC-Net (Atom and Bond Center Network) to predict graph structures directly. Based on the divide-and-conquer principle, we propose to model an atom or a bond as a single point in the center. In this way, we can leverage a fully convolutional neural network (CNN) to generate a series of heat-maps to identify these points and predict relevant properties, such as atom types, atom charges, bond types and other properties. Thus, the molecular structure can be recovered by assembling the detected atoms and bonds. Our approach integrates all the detection and property prediction tasks into a single fully CNN, which is scalable and capable of processing molecular images quite efficiently. Experimental results demonstrate that our method could achieve a significant improvement in recognition performance compared with publicly available tools. The proposed method could be considered as a promising solution to OCSR problems and a starting point for the acquisition of molecular information in the literature.


Asunto(s)
Aprendizaje Profundo , Estructura Molecular , Redes Neurales de la Computación
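
A generic sketch of decoding center-point heat-maps into discrete detections, the kind of post-processing step an atom/bond center network needs after inference; the threshold and window size are illustrative, and this is not the ABC-Net decoder itself.

import numpy as np
from scipy.ndimage import maximum_filter

def heatmap_peaks(heatmap, threshold=0.5, window=3):
    """A pixel is an atom/bond candidate if it is a local maximum above `threshold`."""
    local_max = maximum_filter(heatmap, size=window) == heatmap
    peaks = np.argwhere(local_max & (heatmap > threshold))
    return [tuple(p) for p in peaks]             # (row, col) centers

hm = np.zeros((64, 64))
hm[20, 30] = 0.9                                 # stand-in atom-center responses
hm[40, 10] = 0.7
print(heatmap_peaks(hm))                         # -> [(20, 30), (40, 10)]
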
15.
Comput Methods Programs Biomed ; 215: 106616, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35026623

ABSTRACT

BACKGROUND AND OBJECTIVE: We propose a novel deep neural network, the 3D Multi-Scale Residual Fully Convolutional Neural Network (3D-MS-RFCNN), to improve segmentation of extremely large-sized kidney tumors. METHOD: The multi-scale approach with a deep neural network is applied to capture global contextual features. Our method, 3D-MS-RFCNN, consists of two encoders and one decoder forming a single complete network. One of the encoders is designed to capture global contextual information by using low-resolution, down-sampled data from the input images. In the decoder, features from the global-context encoder are concatenated with up-sampled features from the previous layer and features from the other encoder. An ensemble learning strategy is also applied. RESULTS: We evaluated the performance of our proposed method using the public KiTS dataset and an in-house hospital dataset. Compared with the state-of-the-art method, Res3D U-Net, our model, 3D-MS-RFCNN, demonstrated greater accuracy (0.9390 Dice score on the KiTS dataset and 0.8575 Dice score on the external dataset) for segmenting extremely large-sized kidney tumors. CONCLUSIONS: Our proposed network shows significantly improved segmentation performance on extremely large-sized targets. This study can be usefully employed in the field of medical image analysis.


Asunto(s)
Procesamiento de Imagen Asistido por Computador , Neoplasias Renales , Humanos , Neoplasias Renales/diagnóstico por imagen , Redes Neurales de la Computación , Tomografía Computarizada por Rayos X
16.
Med Phys ; 49(3): 1635-1647, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35083756

ABSTRACT

BACKGROUND: Chemical exchange saturation transfer (CEST) MRI is a promising imaging modality in ischemic stroke detection due to its sensitivity in sensing post-ischemic pH alteration. However, the accurate segmentation of pH-altered regions remains difficult due to the complicated sources of water signal changes in CEST MRI. Meanwhile, manual localization and quantification of stroke lesions are laborious and time-consuming, which cannot meet the urgent need for timely therapeutic interventions. PURPOSE: The goal of this study was to develop an automatic lesion segmentation approach for the ischemic region based on CEST MR images. A novel segmentation framework based on a fully convolutional neural network was investigated for this task. METHODS: Z-spectra from 10 rats were manually labeled as ground truth and split into two datasets, where the training dataset, including 3 rats, was used to generate a segmentation model, and the remaining rats were used as test datasets to evaluate the model's performance. Then a 1D fully convolutional neural network equipped with bottleneck structures was set up, and a Grad-CAM approach was used to produce a coarse localization map reflecting each pixel's relevance to the "ischemia" class. RESULTS: Compared with the ground truth, the proposed network model achieved satisfying segmentation results with high values of the evaluation metrics, including specificity (SPE), sensitivity (SEN), accuracy (ACC), and Dice similarity coefficient (DSC), especially in some intractable situations where conventional MRI modalities and the CEST quantitative method failed to distinguish between ischemic and normal tissues; the model trained with augmentation was also robust to input perturbations. The Grad-CAM maps showed clear distributions of tissue change and made the segmentations interpretable, correlated strongly with the quantitative method, and offered further insight into how the network functions. CONCLUSIONS: The proposed method can segment the ischemic region from CEST images, with the Grad-CAM maps giving access to interpretative information about the segmentations, which demonstrates great potential for clinical routine use.


Asunto(s)
Procesamiento de Imagen Asistido por Computador , Accidente Cerebrovascular Isquémico , Imagen por Resonancia Magnética , Animales , Procesamiento de Imagen Asistido por Computador/métodos , Accidente Cerebrovascular Isquémico/diagnóstico por imagen , Imagen por Resonancia Magnética/métodos , Redes Neurales de la Computación , Ratas
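
A minimal PyTorch sketch of Grad-CAM applied to a 1D CNN over a Z-spectrum, illustrating how a coarse relevance map for the "ischemia" class could be produced; the toy network, layer choice, and normalization are assumptions, not the authors' model.

import torch
import torch.nn as nn

class GradCAM1D:
    """Minimal Grad-CAM for a 1D CNN: weight each channel of a chosen conv layer's
    activation by the mean gradient of the target class score (illustrative)."""
    def __init__(self, model, target_layer):
        self.acts, self.grads = None, None
        target_layer.register_forward_hook(lambda m, i, o: setattr(self, 'acts', o.detach()))
        target_layer.register_full_backward_hook(lambda m, gi, go: setattr(self, 'grads', go[0].detach()))
        self.model = model
    def __call__(self, x, class_idx):
        score = self.model(x)[:, class_idx].sum()
        self.model.zero_grad()
        score.backward()
        weights = self.grads.mean(dim=-1, keepdim=True)           # (N, C, 1)
        cam = torch.relu((weights * self.acts).sum(dim=1))        # (N, L) relevance per offset
        return cam / (cam.max() + 1e-8)

# Toy 1D network over a Z-spectrum of 64 frequency offsets.
net = nn.Sequential(nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 2))
cam = GradCAM1D(net, net[0])(torch.randn(4, 1, 64), class_idx=1)
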
17.
Interdiscip Sci ; 14(1): 34-44, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34224083

ABSTRACT

Alzheimer's disease is an irreversible neurological brain disorder. Early detection and proper treatment of Alzheimer's disease can help prevent brain tissue damage. The study was intended to explore the segmentation performance of a convolutional neural network (CNN) model on Magnetic Resonance (MR) imaging for Alzheimer's diagnosis and nursing. Specifically, 18 Alzheimer's patients admitted to Indira Gandhi Medical College (IGMC) hospital were selected as the experimental group, with 18 healthy volunteers in the Ctrl group. Furthermore, the CNN model was applied to segment the MR imaging of Alzheimer's patients, and its segmentation effects were compared with those of the fully convolutional neural network (FCNN) and support vector machine (SVM) algorithms. It was found that the CNN model demonstrated higher segmentation precision, and the experimental group showed a higher clinical dementia rating (CDR) score and a lower mini-mental state examination (MMSE) score (P < 0.05). The parahippocampal gyrus and putamen were larger in the Ctrl group (P < 0.05). In the experimental group, the amplitude of low-frequency fluctuation (ALFF) was positively correlated with the MMSE score in the areas of the bilateral cingulum gyri (r = 0.65) and precuneus (r = 0.59). In conclusion, the grey matter structure is damaged in Alzheimer's patients, hippocampal ALFF and regional homogeneity (ReHo) are involved in the neuronal compensation mechanism for hippocampal damage, and caregivers should take an active nursing approach.


Asunto(s)
Enfermedad de Alzheimer , Redes Neurales de la Computación , Algoritmos , Enfermedad de Alzheimer/diagnóstico por imagen , Enfermedad de Alzheimer/enfermería , Humanos , Imagen por Resonancia Magnética/métodos , Máquina de Vectores de Soporte
18.
Int J Comput Assist Radiol Surg ; 16(10): 1785-1794, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34287750

ABSTRACT

PURPOSE: In cranio-maxillofacial surgery, it is of great clinical significance to segment the mandible accurately and automatically from CT images. However, the connected regions and blurred boundaries at the teeth and condyles make the process challenging. At present, the mandible is commonly segmented by experienced doctors using manual or semi-automatic methods, which is time-consuming and has poor segmentation consistency. In addition, existing automatic segmentation methods still have problems such as region misjudgment, low accuracy, and long processing times. METHODS: To address these issues, an automatic mandibular segmentation method using a 3D fully convolutional neural network based on densely connected atrous spatial pyramid pooling (DenseASPP) and attention gates (AG) is proposed in this paper. Firstly, the DenseASPP module was added to the network to extract dense features at multiple scales. Thereafter, the AG module was applied in each skip connection to diminish irrelevant background information and make the network focus on the segmentation regions. Finally, a loss function combining the Dice coefficient and focal loss was used to address the imbalance among sample categories. RESULTS: Test results showed that the proposed network obtained a relatively good segmentation result, with a Dice score of 97.588 ± 0.425%, Intersection over Union of 95.293 ± 0.812%, sensitivity of 96.252 ± 1.106%, average surface distance of 0.065 ± 0.020 mm, and 95% Hausdorff distance of 0.491 ± 0.021 mm. The comparison with other segmentation networks showed that our network not only had a relatively high segmentation accuracy but also effectively reduced the network's misjudgments. Meanwhile, the surface distance errors also showed that our segmentation results were relatively close to the ground truth. CONCLUSION: The proposed network has better segmentation performance and realizes accurate and automatic segmentation of the mandible. Furthermore, its segmentation time is 50.43 s for one CT scan, which greatly improves the doctor's work efficiency. It will have practical significance in cranio-maxillofacial surgery in the future.


Asunto(s)
Procesamiento de Imagen Asistido por Computador , Redes Neurales de la Computación , Atención , Humanos , Mandíbula/diagnóstico por imagen , Tomografía Computarizada por Rayos X
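
The "Dice coefficient plus focal loss" combination mentioned in the entry above can be sketched as follows; the relative weighting of the two terms and the focal-loss hyperparameters are illustrative, not the paper's settings.

import torch
import torch.nn.functional as F

def dice_focal_loss(logits, target, alpha=0.25, gamma=2.0, eps=1e-6):
    """Combined Dice + focal loss for binary segmentation, a common way to counter
    class imbalance (equal weighting of the two terms is an assumption)."""
    prob = torch.sigmoid(logits)
    # Dice term
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    # Focal term
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction='none')
    pt = torch.where(target > 0.5, prob, 1 - prob)
    alpha_t = torch.where(target > 0.5, torch.full_like(target, alpha), torch.full_like(target, 1 - alpha))
    focal = (alpha_t * (1 - pt) ** gamma * bce).mean()
    return dice + focal

logits = torch.randn(1, 1, 32, 32, 32)                       # toy 3D segmentation output
target = (torch.rand(1, 1, 32, 32, 32) > 0.9).float()        # sparse foreground labels
print(dice_focal_loss(logits, target).item())
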
19.
Comput Med Imaging Graph ; 90: 101923, 2021 06.
Article in English | MEDLINE | ID: mdl-33894669

ABSTRACT

This paper addresses the problem of liver cancer segmentation in Whole Slide Images (WSIs). We propose a multi-scale image processing method based on an automatic end-to-end deep neural network algorithm for the segmentation of cancerous areas. A seven-level Gaussian pyramid representation of the histopathological image was built to provide texture information at different scales. In this work, several neural architectures were compared, using the original image level for the training procedure. The proposed method is based on U-Net applied to seven levels of various resolutions (pyramidal subsampling), and the predictions at the different levels are combined through a voting mechanism. The final segmentation result is generated at the original image level. Partial color normalization and a weighted overlapping method were applied in preprocessing and prediction, respectively. The results show the effectiveness of the proposed multi-scale approach, which achieved better scores than state-of-the-art methods.


Asunto(s)
Aprendizaje Profundo , Neoplasias , Algoritmos , Humanos , Procesamiento de Imagen Asistido por Computador , Redes Neurales de la Computación
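
A compact sketch of the pyramid-and-vote idea in the entry above: segment each Gaussian-pyramid level, upsample the per-level masks back to full resolution, and take a majority vote; the stand-in predictor and level count are illustrative, and a real U-Net would replace the threshold function.

import cv2
import numpy as np

def pyramid_vote(image, predict_fn, levels=7):
    """Run a segmentation model on each level of a Gaussian pyramid, upsample the
    per-level binary masks to full resolution, and combine them by majority vote."""
    h, w = image.shape[:2]
    pyramid, current = [image], image
    for _ in range(levels - 1):
        current = cv2.pyrDown(current)
        pyramid.append(current)
    votes = [cv2.resize(predict_fn(lvl).astype(np.uint8), (w, h),
                        interpolation=cv2.INTER_NEAREST) for lvl in pyramid]
    return (np.mean(votes, axis=0) >= 0.5).astype(np.uint8)   # majority of levels

# Stand-in predictor: thresholds the green channel (a trained U-Net would go here).
wsi_tile = np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8)
mask = pyramid_vote(wsi_tile, lambda img: img[..., 1] > 128, levels=4)
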
20.
Phys Med Biol ; 66(7)2021 03 23.
Article in English | MEDLINE | ID: mdl-33652418

ABSTRACT

Ultrasound localization microscopy (ULM) has been proposed to image microvasculature beyond the ultrasound diffraction limit. Although ULM can attain microvascular images with sub-diffraction resolution, long data acquisition and processing times are its critical limitations. Deep learning-based ULM (deep-ULM) has been proposed to mitigate these limitations. However, microbubble (MB) localization in existing deep-ULMs is currently based on spatial information alone, without the use of temporal information. The highly spatiotemporally coherent MB signals provide a strong feature that can be used to differentiate MB signals from background artifacts. In this study, a deep neural network was employed and trained with spatiotemporal ultrasound datasets to better identify the MB signals by leveraging both the spatial and temporal information of the MB signals. Training, validation, and testing datasets were acquired from an MB suspension to mimic realistic intensity-varying and moving MB signals. The performance of the proposed network was first demonstrated on a chicken embryo chorioallantoic membrane dataset with an optical microscopic image as the reference standard. Substantial improvement in spatial resolution was shown for the reconstructed super-resolved images compared with power Doppler images. The full-width-half-maximum (FWHM) of a microvessel was improved from 133 µm to 35 µm, which is smaller than the ultrasound wavelength (73 µm). The proposed method was further tested on an in vivo human liver dataset. Results showed the reconstructed super-resolved images could resolve a microvessel of nearly 170 µm (FWHM). Adjacent microvessels with a distance of 670 µm, which cannot be resolved with power Doppler imaging, can be well separated with the proposed method. Improved contrast ratios using the proposed method were shown compared with those of the conventional deep-ULM method. Additionally, the processing time to reconstruct a high-resolution ultrasound frame with an image size of 1024 × 512 pixels was around 16 ms, comparable to state-of-the-art deep-ULMs.


Asunto(s)
Microvasos , Animales , Embrión de Pollo , Pollos , Procesamiento de Imagen Asistido por Computador , Microburbujas , Microscopía , Microvasos/diagnóstico por imagen , Redes Neurales de la Computación , Ultrasonografía