Results 1 - 20 of 195
1.
Animals (Basel) ; 14(17)2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39272273

ABSTRACT

Ovine pulmonary adenocarcinoma (OPA) is a contagious lung tumour caused by the Jaagsiekte Sheep Retrovirus (JSRV). Histopathological diagnosis is the gold standard for OPA diagnosis. However, interpretation of traditional pathology images is complex and operator dependent. The mask regional convolutional neural network (Mask R-CNN) has emerged as a valuable tool in pathological diagnosis. This study utilized 54 typical OPA whole slide images (WSI) to extract 7167 typical lesion images containing OPA and construct a Common Objects in Context (COCO) dataset for OPA pathological images. The dataset was divided into training and test sets (8:2 ratio) for model training and validation. Mean average specificity (mASp) and average sensitivity (ASe) were used to evaluate model performance. Six WSI-level pathological images (three OPA and three non-OPA images), not included in the dataset, were used for anti-peeking model validation. A random selection of 500 images, not included in the dataset establishment, was used to compare the performance of the model with assessment by pathologists; accuracy, sensitivity, specificity, and concordance rate were evaluated. The model achieved a mASp of 0.573 and an ASe of 0.745, demonstrating effective lesion detection and alignment with expert annotation. In the anti-peeking verification, the model performed well in locating OPA lesions and distinguished OPA from non-OPA pathological images. In the random 500-image diagnosis, the model achieved 92.8% accuracy, 100% sensitivity, and 88% specificity, with agreement rates of 100% and 96.5% with the junior and senior pathologists, respectively. In conclusion, the Mask R-CNN-based diagnostic model developed for OPA facilitates rapid and accurate diagnosis in practical applications.
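
For readers who want to reproduce the kind of 8:2 split described above, a minimal Python sketch is given below; it assumes a COCO-format annotation file (the name opa_coco.json is a placeholder, not from the paper).

# Sketch: split a COCO-format annotation file into training and test subsets (8:2).
# The file name "opa_coco.json" is a placeholder.
import json
import random

with open("opa_coco.json") as f:
    coco = json.load(f)

random.seed(42)
image_ids = [img["id"] for img in coco["images"]]
random.shuffle(image_ids)
cut = int(0.8 * len(image_ids))
splits = {"train": set(image_ids[:cut]), "test": set(image_ids[cut:])}

for name, ids in splits.items():
    subset = {
        "images": [img for img in coco["images"] if img["id"] in ids],
        "annotations": [a for a in coco["annotations"] if a["image_id"] in ids],
        "categories": coco["categories"],
    }
    with open(f"opa_{name}.json", "w") as f:
        json.dump(subset, f)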

2.
Sci Total Environ ; 951: 175813, 2024 Nov 15.
Article in English | MEDLINE | ID: mdl-39191331

ABSTRACT

Investigating the interaction between influent particles and biomass is fundamental to biological wastewater treatment. Micro-level methods, such as microscope image analysis with the conventional ImageJ processing software, allow for this. However, these methods are costly and time-consuming, and require extensive manual parameter tuning. To address this problem, we propose a deep learning (DL) method to automatically detect and quantify microparticles that are free from, or entrapped in, biomass in microscope images. First, we introduce the "TU Delft-Interaction between Particles and Biomass" dataset containing labeled microscope images. We then built DL models on this dataset with seven state-of-the-art architectures for an instance segmentation task, including Mask R-CNN, Cascade Mask R-CNN, Yolact and YOLOv8. The results show that Cascade Mask R-CNN with a ResNet50 backbone achieves promising detection accuracy, with a mAP50box and mAP50mask of 90.6 % on the test set. We then benchmarked our results against the conventional ImageJ processing method. The DL method significantly outperforms the ImageJ method in both detection accuracy and processing cost, showing a 13.8 % improvement in micro-average precision and a 21.7 % improvement in micro-average recall. Moreover, the DL method can process 70 images within 1 min, whereas the ImageJ method takes at least 6 h. This promising performance makes our method a potential, affordable alternative for examining the interaction between microparticles and biomass in the biological wastewater treatment process, offering useful insights into the treatment process and enabling further study of microparticle transfer in biological treatment systems.


Subject(s)
Biomass , Deep Learning , Waste Disposal, Fluid , Wastewater , Waste Disposal, Fluid/methods
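
The 13.8 % and 21.7 % figures above are micro-averages, i.e., precision and recall computed from counts pooled over all classes. A minimal sketch with illustrative counts (not the paper's data):

# Sketch: micro-averaged precision and recall pooled over classes.
# The per-class TP/FP/FN counts below are illustrative, not the paper's data.
counts = {
    "free_particle":      {"tp": 420, "fp": 35, "fn": 50},
    "entrapped_particle": {"tp": 310, "fp": 40, "fn": 45},
}

tp = sum(c["tp"] for c in counts.values())
fp = sum(c["fp"] for c in counts.values())
fn = sum(c["fn"] for c in counts.values())

micro_precision = tp / (tp + fp)
micro_recall = tp / (tp + fn)
print(f"micro-P = {micro_precision:.3f}, micro-R = {micro_recall:.3f}")
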
3.
Sci Rep ; 14(1): 19039, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39152188

ABSTRACT

Identification of high consequence areas is an important task in pipeline integrity management. However, traditional identification methods are generally characterized by low efficiency, high cost and low accuracy. For this reason, this paper proposes a recognition method based on an improved Mask Region-based Convolutional Neural Network (Mask R-CNN). A coordinate attention mechanism module is introduced into the traditional Mask R-CNN algorithm to improve recognition accuracy and reduce training time. Based on the identification results, GIS tools are used to establish high consequence areas along both sides of the pipeline, and their grade and scope are determined according to the relevant specifications. The method is applied to identify high consequence areas along a pipeline section in Guangdong Province. The results show that: (1) the improved algorithm raises the average identification accuracy for densely populated, geologic hazard, and flammable and explosive high consequence areas by 1.7%, 3.4% and 3.9%, respectively; (2) compared with traditional identification methods, the proposed method identifies 8 additional building elements and 0.311 km of additional pipeline mileage. The method can provide a reference for the early identification of high consequence areas of pipelines.
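
The coordinate attention module mentioned above is not specified in detail in the abstract; the following PyTorch sketch shows a coordinate attention block in its common formulation (ReLU in place of h-swish, illustrative channel sizes), not the authors' exact configuration.

# Sketch: coordinate attention block (common formulation); not the paper's code.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # pool along width  -> N x C x H x 1
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # pool along height -> N x C x 1 x W
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)                # original paper uses h-swish
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        xh = self.pool_h(x)                              # N x C x H x 1
        xw = self.pool_w(x).permute(0, 1, 3, 2)          # N x C x W x 1
        y = self.act(self.bn1(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        attn_h = torch.sigmoid(self.conv_h(yh))                      # N x C x H x 1
        attn_w = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # N x C x 1 x W
        return x * attn_h * attn_w

x = torch.randn(1, 256, 64, 64)
print(CoordinateAttention(256)(x).shape)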

4.
PeerJ Comput Sci ; 10: e2158, 2024.
Article in English | MEDLINE | ID: mdl-39145199

ABSTRACT

Gait recognition, a biometric identification method, has garnered significant attention due to its unique attributes, including non-invasiveness, long-distance capture, and resistance to impersonation. Gait recognition has undergone a revolution driven by the remarkable capacity of deep learning to extract complicated features from data. This work provides an overview of current developments in deep learning-based gait identification methods. We explore and analyze the development of gait recognition and highlight its uses in forensics, security, and criminal investigations. The article also delves into the challenges associated with gait recognition, such as variations in walking conditions, viewing angles, and clothing. We discuss the effectiveness of deep neural networks in addressing these challenges by providing a comprehensive analysis of state-of-the-art architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and attention mechanisms. Diverse neural network-based gait recognition models, such as Gate Controlled and Shared Attention ICDNet (GA-ICDNet), Multi-Scale Temporal Feature Extractor (MSTFE), GaitNet, and various CNN-based approaches, demonstrate impressive accuracy across different walking conditions, showcasing the effectiveness of these models in capturing unique gait patterns. GaitNet achieved an exceptional identification accuracy of 99.7%, whereas GA-ICDNet showed high precision with an equal error rate of 0.67% in verification tasks. GaitGraph (ResGCN+2D CNN) achieved rank-1 accuracies ranging from 66.3% to 87.7%, whereas a Fully Connected Network with Koopman Operator achieved an average rank-1 accuracy of 74.7% on OU-MVLP across various conditions. However, GCPFP (GCN with Graph Convolution-Based Part Feature Pooling), utilizing a graph convolutional network (GCN) and GaitSet, achieves the lowest average rank-1 accuracy of 62.4% on CASIA-B, while MFINet (Multiple Factor Inference Network) exhibits the lowest accuracy range of 11.72% to 19.32% under clothing variation conditions on CASIA-B. In addition to an across-the-board analysis of recent breakthroughs in gait recognition, the scope for potential future research directions is also assessed.

5.
Med Biol Eng Comput ; 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39152359

ABSTRACT

The magnetically controlled growing rod technique is an effective surgical treatment for children with early-onset scoliosis. The length of the instrumented growing rods is adjusted regularly to compensate for the normal growth of these patients. Manual measurement of rod length on posteroanterior spine radiographs is subjective and time-consuming. A machine learning (ML) system using a deep learning approach was developed to automatically measure the adjusted rod length. Three ML models (a rod model, a 58 mm model, and a head-piece model) were developed to extract the rod length from radiographs. Three hundred and eighty-seven radiographs were used for model development, and 60 radiographs with 118 rods were set aside for final testing. The average precision (AP), the mean absolute difference (MAD) ± standard deviation (SD), and the inter-method correlation coefficient (ICC[2,1]) between the manual and artificial intelligence (AI) adjustment measurements were used to evaluate the developed method. The APs of the three models were 67.6%, 94.8%, and 86.3%, respectively. The MAD ± SD of the rod length change was 0.98 ± 0.88 mm, and the ICC[2,1] was 0.90. The average time to output a single rod measurement was 6.1 s. The developed AI provided an accurate and reliable method to detect the rod length automatically.
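
As a worked illustration of the reported agreement statistics, the sketch below computes MAD ± SD and the two-way random-effects, absolute-agreement, single-measure ICC(2,1) from paired measurements; the arrays are invented examples, not the study data.

# Sketch: MAD +/- SD and ICC(2,1) between manual and AI measurements.
# The example arrays are illustrative placeholders.
import numpy as np

manual = np.array([23.1, 25.4, 22.8, 27.0, 24.5])  # mm, hypothetical
ai     = np.array([23.9, 24.8, 23.1, 26.2, 25.0])  # mm, hypothetical

diff = np.abs(manual - ai)
mad, sd = diff.mean(), diff.std(ddof=1)

x = np.stack([manual, ai], axis=1)          # n targets x k raters
n, k = x.shape
grand = x.mean()
ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
msr, msc, mse = ss_rows / (n - 1), ss_cols / (k - 1), ss_err / ((n - 1) * (k - 1))
icc21 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
print(f"MAD = {mad:.2f} +/- {sd:.2f} mm, ICC(2,1) = {icc21:.2f}")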

6.
Diagnostics (Basel) ; 14(15)2024 Aug 04.
Article in English | MEDLINE | ID: mdl-39125563

ABSTRACT

The severity of periodontitis can be analyzed by calculating the alveolar crest (ALC) level and the amount of bone loss between the tooth's supporting bone and the cemento-enamel junction (CEJ). However, dentists need to manually mark symptoms on periapical radiographs (PAs) to assess bone loss, a process that is both time-consuming and prone to errors. This study proposes a new method that supports disease evaluation and reduces such errors. Firstly, innovative periodontitis image enhancement methods are employed to improve PA image quality. Subsequently, single teeth are accurately extracted from PA images by object detection with a maximum accuracy of 97.01%. An instance segmentation model developed in this study accurately extracts regions of interest, enabling the generation of masks for tooth bone and tooth crown with accuracies of 93.48% and 96.95%, respectively. Finally, a novel detection algorithm is proposed to automatically mark the CEJ and ALC of symptomatic teeth, facilitating faster and more accurate assessment of bone loss severity by dentists. The PA image database used in this study was provided by Chang Gung Medical Center, Taiwan (IRB number 02002030B0). The techniques developed in this research significantly reduce the time required for dental diagnosis and enhance healthcare quality.

7.
Comput Biol Med ; 180: 108927, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39096608

ABSTRACT

Rare genetic diseases are difficult to diagnose, which translates into a diagnostic odyssey for patients. This is particularly true for the more than 900 rare diseases that include orodental developmental anomalies such as missing teeth. If left untreated, their symptoms can become significant and disabling for the patient, so early detection and rapid management are essential. The i-Dent project aims to supply a pre-diagnostic tool to detect rare diseases presenting with tooth agenesis of varying severity and pattern. To identify missing teeth, image segmentation models (Mask R-CNN, U-Net) were trained to automatically detect teeth on patients' panoramic dental X-rays. Tooth segmentation enables identification of the teeth that are present or missing within the mouth. Furthermore, a dental age assessment is conducted to verify whether the absence of teeth is an anomaly or simply a characteristic of the patient's age. Because of the small size of our dataset, we developed a new dental age assessment technique based on the tooth eruption rate. Information about missing teeth is then used by a final algorithm, based on agenesis probabilities, to propose a pre-diagnosis of a rare disease. The results obtained by our system in detecting three gene-related forms (PAX9, WNT10A and EDA) are very promising, providing a pre-diagnosis with an average accuracy of 72%.


Subject(s)
Rare Diseases , Humans , Rare Diseases/genetics , Rare Diseases/diagnostic imaging , Child , Male , Female , Radiography, Panoramic , Adolescent
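
The abstract does not detail the final algorithm beyond its use of agenesis probabilities; as a purely hypothetical illustration, the sketch below combines invented per-tooth agenesis probabilities into a naive log-likelihood score per candidate gene.

# Hypothetical sketch: combine per-tooth agenesis probabilities into a naive
# pre-diagnosis score per candidate gene. All probabilities are invented for
# illustration and are NOT taken from the paper.
import math

# P(tooth position is missing | gene), for a few FDI tooth positions
agenesis_prob = {
    "PAX9":   {"17": 0.60, "25": 0.30, "35": 0.55, "45": 0.50},
    "WNT10A": {"17": 0.20, "25": 0.65, "35": 0.40, "45": 0.35},
    "EDA":    {"17": 0.45, "25": 0.25, "35": 0.70, "45": 0.60},
}

missing = {"17", "35"}   # teeth the segmentation step flagged as absent

scores = {}
for gene, probs in agenesis_prob.items():
    log_score = 0.0
    for tooth, p in probs.items():
        log_score += math.log(p if tooth in missing else 1.0 - p)
    scores[gene] = log_score

best = max(scores, key=scores.get)
print(f"pre-diagnosis suggestion: {best} (log-scores: {scores})")
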
8.
Environ Res ; 262(Pt 1): 119792, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39142455

ABSTRACT

The functionality of activated sludge in wastewater treatment processes depends largely on the structural and microbial composition of its flocs, which are complex assemblages of microorganisms and their secretions. However, monitoring these flocs in real-time and consistently has been challenging due to the lack of suitable technologies and analytical methods. Here we present a laboratory setup capable of capturing instantaneous microscopic images of activated sludge, along with algorithms to interpret these images. To improve floc identification, an advanced Mask R-CNN-based segmentation that integrates a Dual Attention Network (DANet) with an enhanced Feature Pyramid Network (FPN) was used to enhance feature extraction and segmentation accuracy. Additionally, our novel PointRend module meticulously refines the contours of boundaries, significantly minimising pixel inaccuracies. Impressively, our approach achieved a floc detection accuracy of >95%. This development marks a significant advancement in real-time sludge monitoring, offering essential insights for optimising wastewater treatment operations proactively.
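
As an illustration of how such instance masks could feed real-time monitoring, the sketch below derives simple per-floc size statistics from predicted binary masks; the array shape and calibration factor are assumptions, not details from the paper.

# Sketch: derive simple floc morphology statistics from predicted instance masks.
# `masks` is assumed to be an (N, H, W) boolean array of per-floc masks and
# `um_per_pixel` a known calibration factor; both are placeholders here.
import numpy as np

def floc_stats(masks: np.ndarray, um_per_pixel: float):
    stats = []
    for m in masks:
        area_px = int(m.sum())
        area_um2 = area_px * um_per_pixel ** 2
        # equivalent circular diameter of the floc
        eq_diameter_um = 2.0 * np.sqrt(area_um2 / np.pi)
        stats.append({"area_um2": area_um2, "eq_diameter_um": eq_diameter_um})
    return stats

masks = np.zeros((3, 512, 512), dtype=bool)  # placeholder predictions
masks[0, 100:150, 100:160] = True
print(floc_stats(masks, um_per_pixel=0.65)[0])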

9.
Materials (Basel) ; 17(13)2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38998430

ABSTRACT

This study represents a significant advancement in structural health monitoring by integrating infrared thermography (IRT) with cutting-edge deep learning techniques, specifically through the use of the Mask R-CNN neural network. This approach targets the precise detection and segmentation of hidden defects within the interfacial layers of Fiber-Reinforced Polymer (FRP)-reinforced concrete structures. Employing a dual RGB and thermal camera setup, we captured and meticulously aligned image data, which were then annotated for semantic segmentation to train the deep learning model. The fusion of the RGB and thermal imaging significantly enhanced the model's capabilities, achieving an average accuracy of 96.28% across a 5-fold cross-validation. The model demonstrated robust performance, consistently identifying true negatives with an average specificity of 96.78% and maintaining high precision at 96.42% in accurately delineating damaged areas. It also showed a high recall rate of 96.91%, effectively recognizing almost all actual cases of damage, which is crucial for the maintenance of structural integrity. The balanced precision and recall culminated in an average F1-score of 96.78%, highlighting the model's effectiveness in comprehensive damage assessment. Overall, this synergistic approach of combining IRT and deep learning provides a powerful tool for the automated inspection and preservation of critical infrastructure components.
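
The abstract does not state how the RGB and thermal channels were fused; one simple possibility, early fusion into a four-channel input, is sketched below with placeholder arrays.

# Sketch: early fusion of an aligned RGB image and a single-channel thermal map
# into one 4-channel tensor for a segmentation network. Arrays are placeholders;
# the paper's exact fusion strategy may differ.
import numpy as np
import torch

rgb = np.random.rand(480, 640, 3).astype(np.float32)    # stand-in for an RGB frame, [0, 1]
thermal = np.random.rand(480, 640).astype(np.float32)   # stand-in for an aligned thermal frame

# normalise the thermal channel to [0, 1] and stack it as a 4th channel
thermal = (thermal - thermal.min()) / (thermal.max() - thermal.min() + 1e-8)
fused = np.dstack([rgb, thermal])                        # H x W x 4

x = torch.from_numpy(fused).permute(2, 0, 1).unsqueeze(0)  # 1 x 4 x H x W
print(x.shape)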

10.
Curr Med Imaging ; 20: e15734056305021, 2024.
Article in English | MEDLINE | ID: mdl-38874030

ABSTRACT

INTRODUCTION: Prostate cancer (PCa) is the second leading cause of cancer death among men in America. It is also one of the most common cancers in men worldwide, and its annual incidence is striking. As with other diagnostic and prognostic medical systems, deep learning-based automated recognition and detection systems (i.e., computer-aided detection (CAD) systems) have gained enormous attention in PCa. METHODS: These paradigms have attained promising results, with high segmentation, detection, and classification accuracy. Numerous researchers have reported efficient results from deep learning-based approaches compared with conventional systems that use pathological samples. RESULTS: This work performs prostate segmentation using transfer learning-based Mask R-CNN, which in turn supports prostate cancer detection. CONCLUSION: Lastly, limitations of the current work, research findings, and future prospects are discussed.


Subject(s)
Deep Learning , Magnetic Resonance Imaging , Prostatic Neoplasms , Humans , Prostatic Neoplasms/diagnostic imaging , Male , Magnetic Resonance Imaging/methods , Prostate/diagnostic imaging , Neural Networks, Computer , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
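
A minimal TorchVision sketch of the kind of transfer learning described (a COCO-pretrained Mask R-CNN re-headed for background + prostate) is shown below; the training hyperparameters are illustrative and not taken from the paper.

# Sketch: transfer learning with a COCO-pretrained Mask R-CNN in TorchVision,
# re-headed for two classes (background + prostate). Hyperparameters are illustrative.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2  # background + prostate
model = maskrcnn_resnet50_fpn(weights="DEFAULT")

# replace the box classification head
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# replace the mask prediction head
in_channels_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels_mask, 256, num_classes)

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=0.005, momentum=0.9, weight_decay=0.0005,
)
# In training mode, model([image_tensor], [target_dict]) returns a dict of losses.
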
11.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(3): 527-534, 2024 Jun 25.
Article in Chinese | MEDLINE | ID: mdl-38932539

ABSTRACT

Positron emission tomography/computed tomography (PET/CT) lung images present several challenges, such as limited feature information in lesion regions, complex and diverse lesion shapes, and blurred boundaries between lesions and surrounding tissues, which lead to inadequate extraction of tumor lesion features by the model. To address these problems, this paper proposes a dense interactive feature fusion Mask RCNN (DIF-Mask RCNN) model. Firstly, a feature extraction network with a cross-scale backbone and auxiliary structures was designed to extract lesion features at different scales. Then, a dense interactive feature enhancement network was designed to enhance lesion detail in the deep feature map by interactively fusing the shallowest lesion features with neighboring and current features through dense connections. Finally, a dense interactive feature fusion feature pyramid network (FPN) was constructed, in which shallow information is added to the deep features one by one along the bottom-up path with dense connections, further enhancing the model's perception of weak features in the lesion region. Ablation and comparison experiments were conducted on a clinical PET/CT lung image dataset. The results showed that the APdet, APseg, APdet_s and APseg_s of the proposed model were 67.16%, 68.12%, 34.97% and 37.68%, respectively. Compared with Mask RCNN (ResNet50), the APdet and APseg increased by 7.11% and 5.14%, respectively. The DIF-Mask RCNN model can effectively detect and segment tumor lesions, providing an important reference and evaluation basis for computer-aided diagnosis of lung cancer.


Subject(s)
Lung Neoplasms , Positron Emission Tomography Computed Tomography , Humans , Lung Neoplasms/diagnostic imaging , Positron Emission Tomography Computed Tomography/methods , Lung/diagnostic imaging , Algorithms , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
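
As a conceptual illustration only (not the authors' DIF-Mask RCNN), the sketch below shows dense bottom-up fusion in which every deeper pyramid level also receives projected, downsampled copies of all shallower levels.

# Conceptual sketch of dense bottom-up fusion: each deeper pyramid level also
# receives 1x1-projected, downsampled copies of all shallower levels.
# This illustrates the idea only; it is not the paper's exact module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseBottomUpFusion(nn.Module):
    def __init__(self, channels=256, num_levels=4):
        super().__init__()
        self.proj = nn.ModuleList(nn.Conv2d(channels, channels, 1) for _ in range(num_levels))

    def forward(self, feats):           # feats[0] is the shallowest / highest-resolution level
        outs = [feats[0]]
        for i in range(1, len(feats)):
            fused = feats[i]
            for j in range(i):          # dense connections from all shallower levels
                shallow = F.adaptive_avg_pool2d(self.proj[j](feats[j]), feats[i].shape[-2:])
                fused = fused + shallow
            outs.append(fused)
        return outs

feats = [torch.randn(1, 256, 2 ** (7 - k), 2 ** (7 - k)) for k in range(4)]
print([o.shape for o in DenseBottomUpFusion()(feats)])
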
12.
Ultrasonics ; 142: 107350, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38823150

ABSTRACT

Fingerprint authentication is widely used in various areas. While existing methods effectively extract and match fingerprint features, they encounter difficulties in detecting wet fingers and identifying false minutiae. In this paper, a fast fingerprint inversion and authentication method based on Lamb waves is developed by integrating deep learning and multi-scale fusion. This method speeds up the inversion process through deep fast inversion tomography (DeepFIT) and uses Mask R-CNN to improve authentication accuracy. DeepFIT utilizes fully connected and convolutional operations to approximate the descent gradient, enhancing the efficiency of ultrasonic array reconstruction. This suppresses artifacts and accelerates sub-millimeter-level fingerprint minutiae inversion. By identifying the overall morphological relationships of the various minutiae in fingerprints, meaningful minutiae representing individual identities are extracted by the Mask R-CNN method, which segments and matches multi-scale fingerprint features, improving the reliability of authentication results. Results indicate that the proposed method has high accuracy, robustness, and speed, optimizing the entire fingerprint authentication process.

13.
Data Brief ; 54: 110537, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38882193

ABSTRACT

The exploration of ground-dwelling nocturnal fauna represents a significant challenge due to its broad implications across various sectors, including pesticide management, crop yield forecasting, and plant disease identification. This paper unveils an annotated dataset, BioAuxdataset, aimed at facilitating the recognition of such fauna through field images gathered across multiple years. Culled from a collection exceeding 100,000 raw field images over a span of four years, this meticulously curated dataset features seven prevalent species of nocturnal ground-dwelling fauna: carabid, mouse, opilion, slug, shrew, small-slug, and worm. In instances of underrepresented species within the dataset, we have implemented straightforward yet potent image augmentation techniques to enhance data quality. BioAuxdataset stands as a valuable resource for the detection and identification of these organisms, leveraging the power of deep learning algorithms to unlock new potentials in ecological research and beyond. This dataset not only enriches the academic discourse but also opens up avenues for practical applications in agriculture, environmental science, and biodiversity conservation.

14.
Article in English | MEDLINE | ID: mdl-38848032

ABSTRACT

PURPOSE: In pathology images, different stains highlight different glomerular structures, so a supervised deep learning-based glomerular instance segmentation model trained on an individual stain performs poorly on other stains. However, it is difficult to obtain a training set with multiple stains because the labeling of pathology images is very time-consuming and tedious. Therefore, in this paper, we propose an unsupervised stain augmentation-based method for segmentation of glomerular instances. METHODS: In this study, we realized the conversion between different staining methods such as PAS, MT and PASM by contrastive unpaired translation (CUT), thus improving the staining diversity of the training set. Moreover, we replaced the backbone of Mask R-CNN with a Swin Transformer to further improve the efficiency of feature extraction and thus achieve better performance on the instance segmentation task. RESULTS: To validate the presented method, we constructed a dataset from 216 WSIs of the three stains. After conducting in-depth experiments, we verified that the instance segmentation method based on stain augmentation outperforms existing methods across all metrics for PAS, PASM, and MT stains. Furthermore, ablation experiments were performed to further demonstrate the effectiveness of the proposed module. CONCLUSION: This study demonstrates the potential of unsupervised stain augmentation to improve glomerular segmentation in pathology analysis. Future research could extend this approach to other complex segmentation tasks in the pathology image domain to further explore the potential of applying stain augmentation techniques in different areas of pathology image analysis.

15.
Diagnostics (Basel) ; 14(11)2024 May 26.
Article in English | MEDLINE | ID: mdl-38893629

ABSTRACT

Pulmonary embolism (PE) refers to the occlusion of pulmonary arteries by blood clots, posing a mortality risk of approximately 30%. The detection of pulmonary embolism within segmental arteries presents greater challenges compared with larger arteries and is frequently overlooked. In this study, we developed a computational method to automatically identify pulmonary embolism within segmental arteries using computed tomography (CT) images. The system architecture incorporates an enhanced Mask R-CNN deep neural network trained on PE-containing images. This network accurately localizes pulmonary embolisms in CT images and effectively delineates their boundaries. This study involved creating a local data set and evaluating the model predictions against pulmonary embolisms manually identified by expert radiologists. The sensitivity, specificity, accuracy, Dice coefficient, and Jaccard index values were obtained as 96.2%, 93.4%, 96%, 0.95, and 0.89, respectively. The enhanced Mask R-CNN model outperformed the traditional Mask R-CNN and U-Net models. This study underscores the influence of Mask R-CNN's loss function on model performance, providing a basis for the potential improvement of Mask R-CNN models for object detection and segmentation tasks in CT images.
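
For reference, the Dice coefficient and Jaccard index reported above can be computed from binary masks as in the sketch below (the masks are illustrative placeholders).

# Sketch: Dice coefficient and Jaccard index between a predicted and a
# reference binary mask (arrays here are illustrative placeholders).
import numpy as np

def dice_jaccard(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum() + eps)
    jaccard = inter / (np.logical_or(pred, gt).sum() + eps)
    return dice, jaccard

pred = np.zeros((256, 256), dtype=bool); pred[50:120, 60:130] = True
gt   = np.zeros((256, 256), dtype=bool); gt[55:125, 60:125] = True
print("Dice = %.3f, Jaccard = %.3f" % dice_jaccard(pred, gt))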

16.
Sci Rep ; 14(1): 14210, 2024 06 20.
Article in English | MEDLINE | ID: mdl-38902285

ABSTRACT

Regular screening for cervical cancer is one of the best tools to reduce cancer incidence. Automated cell segmentation in screening is an essential task because it can provide a better understanding of the characteristics of cervical cells. The main challenge of cell cytoplasm segmentation is that many boundaries in cell clumps are extremely difficult to identify. This paper proposes a new convolutional neural network, based on Mask RCNN and a PointRend module, to segment overlapping cervical cells. The PointRend head concatenates fine-grained features and coarse features extracted from different feature maps to fine-tune the candidate boundary pixels of cell cytoplasm, which are crucial for precise cell segmentation. The proposed model achieves a 0.97 DSC (Dice Similarity Coefficient), 0.96 TPRp (Pixelwise True Positive Rate), 0.007 FPRp (Pixelwise False Positive Rate) and 0.006 FNRo (Object False Negative Rate) on the dataset from ISBI2014. Notably, the proposed method outperforms the state-of-the-art results by about 3 % on DSC, 1 % on TPRp and 1.4 % on FNRo, respectively. The performance metrics of our model on the dataset from ISBI2015 are slightly better than the average values of other approaches. These results indicate that the proposed method could be effective in cytological analysis and help experts correctly identify cervical cell lesions.


Subject(s)
Cervix Uteri , Neural Networks, Computer , Uterine Cervical Neoplasms , Humans , Female , Uterine Cervical Neoplasms/pathology , Uterine Cervical Neoplasms/diagnosis , Cervix Uteri/pathology , Cervix Uteri/diagnostic imaging , Cervix Uteri/cytology , Image Processing, Computer-Assisted/methods , Algorithms , Early Detection of Cancer/methods
17.
Sci Rep ; 14(1): 10016, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38693219

ABSTRACT

Agricultural dykelands in Nova Scotia rely heavily on a surface drainage technique called land forming, which is used to alter the topography of fields to improve drainage. The presence of land-formed fields provides useful information to better understand land utilization on these lands vulnerable to rising sea levels. Current field boundary delineation and classification methods, such as manual digitization and traditional segmentation techniques, are labour-intensive and often require manual and time-consuming parameter selection. In recent years, deep learning (DL) techniques, including convolutional neural networks and Mask R-CNN, have shown promising results in object recognition, image classification, and segmentation tasks. However, there is a gap in applying these techniques to detecting surface drainage patterns on agricultural fields. This paper develops and tests a Mask R-CNN model for detecting land-formed fields on agricultural dykelands using LiDAR-derived elevation data. Specifically, our approach focuses on identifying groups of pixels as cohesive objects within the imagery, a method that represents a significant advancement over pixel-by-pixel classification techniques. The DL model developed in this study demonstrated a strong overall performance, with a mean Average Precision (mAP) of 0.89 across Intersection over Union (IoU) thresholds from 0.5 to 0.95, indicating its effectiveness in detecting land-formed fields. Results also revealed that 53% of Nova Scotia's dykelands are used for agricultural purposes and that approximately 75% (6924 hectares) of these fields are land-formed. By applying deep learning techniques to LiDAR-derived elevation data, this study offers novel insights into surface drainage mapping, enhancing the capability for precise and efficient agricultural land management in regions vulnerable to environmental changes.
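
The mAP over IoU thresholds 0.5 to 0.95 reported above is the standard COCO metric; a minimal pycocotools sketch is shown below, with placeholder file names.

# Sketch: COCO-style mAP over IoU thresholds 0.50:0.95 with pycocotools.
# "fields_gt.json" and "fields_pred.json" are placeholder file names.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("fields_gt.json")                # ground-truth annotations
coco_dt = coco_gt.loadRes("fields_pred.json")   # model detections

evaluator = COCOeval(coco_gt, coco_dt, iouType="segm")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()

map_50_95 = evaluator.stats[0]   # AP @ IoU=0.50:0.95
print(f"mAP[0.5:0.95] = {map_50_95:.3f}")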

18.
Sci Rep ; 14(1): 10866, 2024 05 13.
Article in English | MEDLINE | ID: mdl-38740920

ABSTRACT

The presence of Arbuscular Mycorrhizal Fungi (AMF) in vascular land plant roots is one of the most ancient of symbioses supporting nitrogen and phosphorus exchange for photosynthetically derived carbon. Here we provide a multi-scale modeling approach to predict AMF colonization of a worldwide crop from a Recombinant Inbred Line (RIL) population derived from Sorghum bicolor and S. propinquum. The high-throughput phenotyping methods of fungal structures here rely on a Mask Region-based Convolutional Neural Network (Mask R-CNN) in computer vision for pixel-wise fungal structure segmentations and mixed linear models to explore the relations of AMF colonization, root niche, and fungal structure allocation. Models proposed capture over 95% of the variation in AMF colonization as a function of root niche and relative abundance of fungal structures in each plant. Arbuscule allocation is a significant predictor of AMF colonization among sibling plants. Arbuscules and extraradical hyphae implicated in nutrient exchange predict highest AMF colonization in the top root section. Our work demonstrates that deep learning can be used by the community for the high-throughput phenotyping of AMF in plant roots. Mixed linear modeling provides a framework for testing hypotheses about AMF colonization phenotypes as a function of root niche and fungal structure allocations.


Subject(s)
Mycorrhizae , Plant Roots , Sorghum , Mycorrhizae/physiology , Plant Roots/microbiology , Sorghum/microbiology , Linear Models , Symbiosis , Neural Networks, Computer
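
A minimal statsmodels sketch of a mixed linear model of the kind described, with a random intercept per RIL line, is given below; the column and file names are hypothetical.

# Sketch: mixed linear model relating AMF colonization to fungal structure
# abundances and root niche, with a random intercept per RIL line.
# Column names ("colonization", "arbuscules", ...) and the file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("amf_phenotypes.csv")   # placeholder file

model = smf.mixedlm(
    "colonization ~ arbuscules + vesicles + extraradical_hyphae + C(root_section)",
    data=df,
    groups=df["ril_line"],               # random effect: RIL genotype
)
result = model.fit()
print(result.summary())
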
19.
J Appl Crystallogr ; 57(Pt 2): 266-275, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38596734

ABSTRACT

In cellulo crystallization is a rare event in nature. Recent advances that have made use of heterologous overexpression can promote the intracellular formation of protein crystals, but new tools are required to detect and characterize these targets in the complex cell environment. The present work makes use of Mask R-CNN, a convolutional neural network (CNN)-based instance segmentation method, for the identification of either single or multi-shaped crystals growing in living insect cells, using conventional bright field images. The algorithm can be rapidly adapted to recognize different targets, with the aim of extracting relevant information to support a semi-automated screening pipeline, in order to aid the development of the intracellular protein crystallization approach.

20.
Front Plant Sci ; 15: 1340584, 2024.
Article in English | MEDLINE | ID: mdl-38601300

ABSTRACT

Introduction: Asian soybean rust is a highly aggressive leaf disease caused by the obligate biotrophic fungus Phakopsora pachyrhizi, which can cause up to 80% yield loss in soybean. Precise image segmentation of the fungus can characterize fungal phenotype transitions during growth and help to discover new medicines and agricultural biocides using large-scale phenotypic screens. Methods: An improved Mask R-CNN method is proposed to segment densely distributed, overlapping and intersecting microimages. First, Res2Net is utilized to layer the residual connections within a single residual block to replace the backbone of the original Mask R-CNN, and it is combined with FPG to enhance the feature extraction capability of the network model. Second, the loss function is optimized: the CIoU loss is adopted for bounding box regression prediction, which accelerates model convergence and supports accurate classification of high-density spore images. Results: The experimental results show that the detection mAP, segmentation mAP and accuracy of the improved algorithm are improved by 6.4%, 12.3% and 2.2%, respectively, over the original Mask R-CNN algorithm. Discussion: This method is better suited to the segmentation of fungal images and provides an effective tool for large-scale phenotypic screens of plant fungal pathogens.
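
The CIoU loss adopted for bounding-box regression is sketched below in its standard formulation (not the authors' implementation).

# Sketch: Complete-IoU (CIoU) loss for axis-aligned boxes in (x1, y1, x2, y2)
# format -- the standard formulation, not the paper's code.
import math
import torch

def ciou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7):
    px1, py1, px2, py2 = pred.unbind(-1)
    tx1, ty1, tx2, ty2 = target.unbind(-1)
    pw, ph = (px2 - px1).clamp(min=0), (py2 - py1).clamp(min=0)
    tw, th = (tx2 - tx1).clamp(min=0), (ty2 - ty1).clamp(min=0)

    inter = ((torch.min(px2, tx2) - torch.max(px1, tx1)).clamp(min=0)
             * (torch.min(py2, ty2) - torch.max(py1, ty1)).clamp(min=0))
    union = pw * ph + tw * th - inter + eps
    iou = inter / union

    cw = torch.max(px2, tx2) - torch.min(px1, tx1)   # smallest enclosing box
    ch = torch.max(py2, ty2) - torch.min(py1, ty1)
    c2 = cw ** 2 + ch ** 2 + eps                     # squared diagonal
    rho2 = ((px1 + px2 - tx1 - tx2) ** 2 + (py1 + py2 - ty1 - ty2) ** 2) / 4  # centre distance

    v = (4 / math.pi ** 2) * (torch.atan(tw / (th + eps)) - torch.atan(pw / (ph + eps))) ** 2
    with torch.no_grad():
        alpha = v / (1 - iou + v + eps)
    return 1 - iou + rho2 / c2 + alpha * v

pred = torch.tensor([[10.0, 10.0, 50.0, 60.0]])
gt = torch.tensor([[12.0, 14.0, 48.0, 58.0]])
print(ciou_loss(pred, gt))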
