Results 1 - 20 of 1,378
1.
Am J Physiol Heart Circ Physiol ; 327(3): H715-H721, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39092999

ABSTRACT

GelBox is open-source software that was developed with the goal of enhancing rigor, reproducibility, and transparency when analyzing gels and immunoblots. It combines image adjustments (cropping, rotation, brightness, and contrast), background correction, and band-fitting in a single application. Users can also associate each lane in an image with metadata (for example, sample type). GelBox data files integrate the raw data, supplied metadata, image adjustments, and band-level analyses in a single file to improve traceability. GelBox has a user-friendly interface and was developed using MATLAB. The software, installation instructions, and tutorials are available at https://campbell-muscle-lab.github.io/GelBox/. NEW & NOTEWORTHY: GelBox is open-source software that was developed to enhance rigor, reproducibility, and transparency when analyzing gels and immunoblots. It combines image adjustments (cropping, rotation, brightness, and contrast), background correction, and band-fitting in a single application. Users can also associate each lane in an image with metadata (for example, sample type).
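GelBox's band analysis rests on a standard densitometry idea: integrate lane intensity above an estimated background. A minimal sketch of that idea (not GelBox's actual MATLAB implementation; the `band_volume` helper and the numbers are purely illustrative) subtracts a linear baseline before summing:

```python
# Minimal sketch of lane-profile band quantification: subtract a linear
# baseline interpolated between the profile's endpoints, then sum the
# remaining intensity as the band "volume".

def band_volume(profile):
    """Integrate a densitometry profile above a linear baseline."""
    n = len(profile)
    start, end = profile[0], profile[-1]
    # Linear baseline between the first and last points of the lane profile
    baseline = [start + (end - start) * i / (n - 1) for i in range(n)]
    return sum(max(p - b, 0.0) for p, b in zip(profile, baseline))

# A symmetric "band" riding on a flat background of 10 intensity units
lane = [10, 10, 12, 18, 25, 18, 12, 10, 10]
print(band_volume(lane))  # background-corrected band volume
```

Real tools fit Gaussian functions to overlapping bands rather than simply summing, but the baseline-subtraction step is the common starting point.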


Subject(s)
Software; Reproducibility of Results; Humans; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Animals
2.
Hum Brain Mapp ; 45(12): e70003, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39185668

ABSTRACT

Computationally expensive data processing in neuroimaging research places demands on energy consumption, and the resulting carbon emissions contribute to the climate crisis. We measured the carbon footprint of the functional magnetic resonance imaging (fMRI) preprocessing tool fMRIPrep, testing the effect of varying parameters on estimated carbon emissions and preprocessing performance. Performance was quantified using (a) statistical individual-level task activation in regions of interest and (b) mean smoothness of preprocessed data. Eight variants of fMRIPrep were run with 257 participants who had completed an fMRI stop signal task (the same data also used in the original validation of fMRIPrep). Some variants led to substantial reductions in carbon emissions without sacrificing data quality: for instance, disabling FreeSurfer surface reconstruction reduced carbon emissions by 48%. We provide six recommendations for minimising emissions without compromising performance. By varying parameters and computational resources, neuroimagers can substantially reduce the carbon footprint of their preprocessing. This is one aspect of our research carbon footprint over which neuroimagers have control and the agency to act.
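The accounting behind such estimates is simple: energy equals power times runtime, and emissions equal energy times the grid's carbon intensity. A minimal sketch with purely illustrative numbers (none taken from the study):

```python
# Back-of-the-envelope estimate of compute carbon emissions:
# energy = power x time, emissions = energy x grid carbon intensity.

def co2_kg(power_watts, hours, grid_kg_per_kwh):
    """kg CO2e for a job drawing `power_watts` for `hours` on a given grid."""
    energy_kwh = power_watts * hours / 1000.0
    return energy_kwh * grid_kg_per_kwh

# A hypothetical 8-hour preprocessing job on a 250 W node,
# grid intensity 0.4 kg CO2e/kWh
full = co2_kg(250, 8, 0.4)
no_surface = full * (1 - 0.48)  # applying the 48% reduction reported above
print(full, no_surface)
```

Dedicated tools (e.g. software carbon trackers) refine this with measured CPU/GPU draw and region-specific grid data, but the arithmetic is the same.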


Subject(s)
Brain; Carbon Footprint; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/standards; Magnetic Resonance Imaging/methods; Female; Male; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Adult; Brain/diagnostic imaging; Brain/physiology; Young Adult; Brain Mapping/methods; Brain Mapping/standards
3.
Hum Brain Mapp ; 45(10): e26778, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38980175

ABSTRACT

Brain activity continuously fluctuates over time, even if the brain is in controlled (e.g., experimentally induced) states. Recent years have seen an increasing interest in understanding the complexity of these temporal variations, for example with respect to developmental changes in brain function or between-person differences in healthy and clinical populations. However, the psychometric reliability of brain signal variability and complexity measures, an important precondition for robust individual-differences and longitudinal research, has not yet been sufficiently studied. We examined split-half reliability and test-retest correlations for task-free (resting-state) BOLD fMRI, as well as split-half reliability for seven functional task data sets from the Human Connectome Project. We observed good to excellent split-half reliability for temporal variability measures derived from rest and task fMRI activation time series (standard deviation, mean absolute successive difference, mean squared successive difference), and moderate test-retest correlations for the same variability measures under rest conditions. Brain signal complexity estimates (several entropy and dimensionality measures) showed moderate to good reliabilities under both rest and task activation conditions. We calculated the same measures also for time-resolved (dynamic) functional connectivity time series and observed moderate to good reliabilities for variability measures, but poor reliabilities for complexity measures derived from functional connectivity time series. Global (i.e., mean across cortical regions) measures tended to show higher reliability than region-specific variability or complexity estimates. Larger subcortical regions showed reliability similar to cortical regions, but small regions showed lower reliability, especially for complexity measures. Lastly, we show that reliability scores depend only minimally on differences in scan length, and we replicate our results across different parcellation and denoising strategies. These results suggest that the variability and complexity of BOLD activation time series are robust measures well-suited for individual differences research. Temporal variability of global functional connectivity over time provides an important novel approach to robustly quantifying the dynamics of brain function. PRACTITIONER POINTS: Variability and complexity measures of BOLD activation show good split-half reliability and moderate test-retest reliability. Measures of variability of global functional connectivity over time can robustly quantify neural dynamics. Length of fMRI data has only a minor effect on reliability.
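The temporal variability measures named above (standard deviation, mean absolute successive difference, mean squared successive difference) are straightforward to compute. A minimal sketch for a single time series (illustrative only, not the authors' pipeline):

```python
# Three temporal-variability measures for one BOLD time series:
# sample SD, mean absolute successive difference (MASD),
# and mean squared successive difference (MSSD).
import math

def variability(ts):
    n = len(ts)
    mean = sum(ts) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in ts) / (n - 1))
    diffs = [b - a for a, b in zip(ts, ts[1:])]  # successive differences
    masd = sum(abs(d) for d in diffs) / len(diffs)
    mssd = sum(d * d for d in diffs) / len(diffs)
    return sd, masd, mssd

sd, masd, mssd = variability([1.0, 3.0, 2.0, 5.0, 4.0])
print(sd, masd, mssd)
```

Split-half reliability then amounts to computing each measure on the two halves of the scan for every subject and correlating the two resulting vectors across subjects.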


Subject(s)
Brain; Connectome; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/standards; Magnetic Resonance Imaging/methods; Reproducibility of Results; Brain/physiology; Brain/diagnostic imaging; Connectome/standards; Connectome/methods; Oxygen/blood; Male; Female; Rest/physiology; Adult; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Brain Mapping/methods; Brain Mapping/standards
4.
Hum Brain Mapp ; 45(11): e26708, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39056477

ABSTRACT

Neuroimaging data acquired using multiple scanners or protocols are increasingly available. However, such data exhibit technical artifacts across batches which introduce confounding and decrease reproducibility. This is especially true when multi-batch data are analyzed using complex downstream models which are more likely to pick up on and implicitly incorporate batch-related information. Previously proposed image harmonization methods have sought to remove these batch effects; however, batch effects remain detectable in the data after applying these methods. We present DeepComBat, a deep learning harmonization method based on a conditional variational autoencoder and the ComBat method. DeepComBat combines the strengths of statistical and deep learning methods in order to account for the multivariate relationships between features while simultaneously relaxing strong assumptions made by previous deep learning harmonization methods. As a result, DeepComBat can perform multivariate harmonization while preserving data structure and avoiding the introduction of synthetic artifacts. We apply this method to cortical thickness measurements from a cognitive-aging cohort and show DeepComBat qualitatively and quantitatively outperforms existing methods in removing batch effects while preserving biological heterogeneity. Additionally, DeepComBat provides a new perspective for statistically motivated deep learning harmonization methods.
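For orientation, the statistical side that DeepComBat builds on can be sketched as a location-and-scale batch adjustment. This is a heavily simplified stand-in for ComBat (no empirical-Bayes shrinkage, no covariate model, and nothing of DeepComBat's conditional variational autoencoder), intended only to show the basic idea:

```python
# Simplified location-and-scale harmonization: rescale each batch's values
# to the pooled mean and SD so batch-level shifts are removed.
import math

def adjust(values, batches):
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    out = []
    for b in set(batches):
        idx = [i for i, bb in enumerate(batches) if bb == b]
        bvals = [values[i] for i in idx]
        bm = sum(bvals) / len(bvals)
        bs = math.sqrt(sum((v - bm) ** 2 for v in bvals) / len(bvals)) or 1.0
        for i in idx:
            # standardize within batch, then restore pooled location/scale
            out.append((i, (values[i] - bm) / bs * sd + mean))
    return [v for _, v in sorted(out)]

print(adjust([0.0, 2.0, 10.0, 12.0], ["a", "a", "b", "b"]))
```

After adjustment both batches share the pooled mean, which is exactly the univariate effect that the deep-learning component of DeepComBat generalizes to multivariate feature relationships.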


Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Neuroimaging; Humans; Neuroimaging/methods; Neuroimaging/standards; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Magnetic Resonance Imaging/standards; Magnetic Resonance Imaging/methods; Cerebral Cortex/diagnostic imaging; Aged; Male; Female
5.
Neuroimage ; 297: 120697, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38908725

ABSTRACT

Quantitative susceptibility mapping (QSM) is an emerging MRI-based technique, and a number of QSM-related algorithms have been proposed to reconstruct maps of tissue susceptibility distribution from phase images. In this paper, we develop a comprehensive susceptibility imaging process and analysis studio (SIPAS) that can accomplish reliable QSM processing and offer a standardized evaluation system. Specifically, SIPAS integrates multiple methods for each step, enabling users to select algorithm combinations according to data conditions, and QSM maps can be evaluated in two ways: image quality indicators computed over all voxels, and region-of-interest (ROI) analysis. Through a sophisticated design of user-friendly interfaces, the results of each procedure can be displayed in axial, coronal, and sagittal views in real time, while ROIs can be shown as 3D renderings. The accuracy and compatibility of SIPAS are demonstrated by experiments on multiple in vivo human brain datasets acquired from 3T, 5T, and 7T MRI scanners of different manufacturers. We also validate the QSM maps obtained by various algorithm combinations in SIPAS, among which the combination of iRSHARP and SFCR achieves the best results under its evaluation system. SIPAS is a comprehensive, sophisticated, and reliable toolkit that may promote the application of QSM in scientific research and clinical practice.


Subject(s)
Algorithms; Brain; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Brain Mapping/methods; Software
6.
Transl Vis Sci Technol ; 13(6): 16, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38904611

ABSTRACT

Purpose: This study enhances Meibomian gland (MG) infrared image analysis in dry eye (DE) research through artificial intelligence (AI). It comprises two main stages, automated eyelid detection and tarsal plate segmentation, to standardize meibography image analysis. The goal is to address limitations of existing assessment methods, bridge the gap between curated and real-world datasets, and standardize MG image analysis. Methods: The approach involves a two-stage process: automated eyelid detection and tarsal plate segmentation. In the first stage, an AI model trained on curated data identifies relevant eyelid areas in non-curated datasets. The second stage refines the eyelid area in meibography images, enabling precise comparisons between normal and DE subjects. This approach also includes specular reflection removal and tarsal plate mask refinement. Results: The methodology achieved a promising instance-wise accuracy of 80.8% for distinguishing meibography images from 399 DE and 235 non-DE subjects. By integrating diverse datasets and refining the area of interest, this approach enhances the accuracy of meibography feature extraction. Dimension reduction through Uniform Manifold Approximation and Projection (UMAP) allows feature visualization, revealing distinct clusters for DE and non-DE phenotypes. Conclusions: The AI-driven methodology presented here quantifies and classifies meibography image features and standardizes the analysis process. By bootstrapping the model from curated datasets, this methodology addresses real-world dataset challenges to enhance the accuracy of meibography image feature extraction. Translational Relevance: The study presents a standardized method for meibography image analysis that could serve as a valuable tool for more targeted investigations into MG characteristics.


Subject(s)
Artificial Intelligence; Dry Eye Syndromes; Meibomian Glands; Humans; Dry Eye Syndromes/diagnostic imaging; Meibomian Glands/diagnostic imaging; Female; Male; Middle Aged; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Adult; Diagnostic Techniques, Ophthalmological/standards; Aged; Infrared Rays
7.
Hum Brain Mapp ; 45(9): e26721, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38899549

ABSTRACT

With the rise of open data, identifiability of individuals based on 3D renderings obtained from routine structural magnetic resonance imaging (MRI) scans of the head has become a growing privacy concern. To protect subject privacy, several algorithms have been developed to de-identify imaging data using blurring, defacing, or refacing. Completely removing facial structures provides the best re-identification protection but can significantly impact post-processing steps, like brain morphometry. As an alternative, refacing methods that replace individual facial structures with generic templates have a lower effect on the geometry and intensity distribution of original scans, and are able to provide more consistent post-processing results, at the price of higher re-identification risk and computational complexity. In the current study, we propose a novel method for anonymized face generation for defaced 3D T1-weighted scans based on a 3D conditional generative adversarial network. To evaluate the performance of the proposed de-identification tool, a comparative study was conducted between several existing defacing and refacing tools, with two different segmentation algorithms (FAST and Morphobox). The aim was to evaluate (i) impact on brain morphometry reproducibility, (ii) re-identification risk, (iii) balance between (i) and (ii), and (iv) the processing time. The proposed method takes 9 s for face generation and is suitable for recovering consistent post-processing results after defacing.


Subject(s)
Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Adult; Brain/diagnostic imaging; Brain/anatomy & histology; Male; Female; Neural Networks, Computer; Imaging, Three-Dimensional/methods; Neuroimaging/methods; Neuroimaging/standards; Data Anonymization; Young Adult; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Algorithms
8.
Neuroinformatics ; 22(3): 297-315, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38861098

ABSTRACT

Pooling data across diverse sources acquired by multisite consortia requires compliance with a predefined reference protocol, i.e., ensuring different sites and scanners for a given project have used identical or compatible MR physics parameter values. Traditionally, this has been an arduous and manual process due to the difficulty of working with the complicated DICOM standard and the lack of resources allocated towards protocol compliance. Moreover, protocol compliance is often overlooked because it goes unrecognized that parameter values are routinely improvised or modified locally at various sites. Inconsistencies in acquisition protocols can reduce SNR and statistical power, and in the worst case may invalidate the results altogether. An open-source tool, mrQA, was developed to automatically assess protocol compliance on standard dataset formats such as DICOM and BIDS, and to study the patterns of non-compliance in over 20 open neuroimaging datasets, including the large ABCD study. The results demonstrate that lack of compliance is rather pervasive. Frequent sources of non-compliance include, but are not limited to, deviations in Repetition Time, Echo Time, Flip Angle, and Phase Encoding Direction. It was also observed that GE and Philips scanners exhibited higher rates of non-compliance relative to Siemens scanners in the ABCD dataset. Continuous monitoring for protocol compliance is strongly recommended before any pre/post-processing, ideally right after acquisition, to avoid the silent propagation of severe or subtle issues. Although this study focuses on neuroimaging datasets, the proposed tool mrQA can work with any DICOM-based dataset.
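The core of such a compliance check is a parameter-by-parameter comparison against a reference protocol. A minimal sketch, not mrQA's actual API (the parameter names, tolerance, and values below are illustrative):

```python
# Flag acquisition parameters that deviate from a reference protocol
# beyond a relative tolerance.

REFERENCE = {"RepetitionTime": 2.0, "EchoTime": 0.03, "FlipAngle": 77}

def non_compliant(session, reference=REFERENCE, rel_tol=0.01):
    """Return {parameter: (observed, reference)} for deviating values."""
    bad = {}
    for key, ref in reference.items():
        val = session.get(key)
        if val is None or abs(val - ref) > rel_tol * abs(ref):
            bad[key] = (val, ref)
    return bad

session = {"RepetitionTime": 2.0, "EchoTime": 0.035, "FlipAngle": 77}
print(non_compliant(session))  # EchoTime deviates beyond the 1% tolerance
```

A real checker would read these values from DICOM headers (e.g., via a DICOM parsing library) and aggregate results across all sessions in a dataset.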


Subject(s)
Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/standards; Software/standards; Guideline Adherence/statistics & numerical data; Guideline Adherence/standards; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Brain/diagnostic imaging
9.
Hum Brain Mapp ; 45(7): e26692, 2024 May.
Article in English | MEDLINE | ID: mdl-38712767

ABSTRACT

In neuroimaging studies, combining data collected from multiple study sites or scanners is becoming common to increase the reproducibility of scientific discoveries. At the same time, unwanted variations arise from using different scanners (inter-scanner biases), which need to be corrected before downstream analyses to facilitate replicable research and prevent spurious findings. While statistical harmonization methods such as ComBat have become popular for mitigating inter-scanner biases in neuroimaging, recent methodological advances have shown that harmonizing heterogeneous covariances results in higher data quality. In vertex-level cortical thickness data, heterogeneity in spatial autocorrelation is a critical factor that affects covariance heterogeneity. Our work proposes a new statistical harmonization method called spatial autocorrelation normalization (SAN) that yields homogeneous covariance in vertex-level cortical thickness data across different scanners. We use an explicit Gaussian process to characterize scanner-invariant and scanner-specific variations and to reconstruct spatially homogeneous data across scanners. SAN is computationally feasible and easily allows the integration of existing harmonization methods. We demonstrate the utility of the proposed method using cortical thickness data from the Social Processes Initiative in the Neurobiology of the Schizophrenia(s) (SPINS) study. SAN is publicly available as an R package.


Subject(s)
Cerebral Cortex; Magnetic Resonance Imaging; Schizophrenia; Humans; Magnetic Resonance Imaging/standards; Magnetic Resonance Imaging/methods; Schizophrenia/diagnostic imaging; Schizophrenia/pathology; Cerebral Cortex/diagnostic imaging; Cerebral Cortex/anatomy & histology; Neuroimaging/methods; Neuroimaging/standards; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Male; Female; Adult; Normal Distribution; Cerebral Cortical Thickness
10.
Neuroinformatics ; 22(3): 269-283, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38763990

ABSTRACT

Magnetic resonance imaging of the brain is a useful tool in both clinic and research settings, aiding in the diagnosis and treatment of neurological disease and expanding our knowledge of the brain. However, there are many challenges inherent in managing and analyzing MRI data, due in large part to the heterogeneity of data acquisition. To address this, we have developed MRIO, the Magnetic Resonance Imaging Acquisition and Analysis Ontology. MRIO provides well-reasoned classes and logical axioms for several MRI acquisition types and for well-known, peer-reviewed analysis software, facilitating the use of MRI data. These classes provide a common language for the neuroimaging research process and help standardize the organization and analysis of MRI data for reproducible datasets. We also provide queries for the automated assignment of analyses to given MRI types. MRIO aids researchers in managing neuroimaging studies by helping organize and annotate MRI data and by integrating with existing standards such as Digital Imaging and Communications in Medicine and the Brain Imaging Data Structure, enhancing reproducibility and interoperability. MRIO was constructed according to Open Biomedical Ontologies Foundry principles and has contributed several classes to the Ontology for Biomedical Investigations to help bridge neuroimaging data to other domains. MRIO addresses the need for a "common language" for MRI that can help manage neuroimaging research by enabling researchers to identify appropriate analyses for sets of scans and by facilitating data organization and reporting.


Subject(s)
Biological Ontologies; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/standards; Brain/diagnostic imaging; Software/standards; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Neuroimaging/methods; Neuroimaging/standards; Databases, Factual/standards
11.
Brain Topogr ; 37(5): 684-698, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38568279

ABSTRACT

While 7T diffusion magnetic resonance imaging (dMRI) has high spatial resolution, its diffusion imaging quality is usually affected by signal loss due to B1 inhomogeneity, T2 decay, susceptibility, and chemical shift. In contrast, 3T dMRI has relatively higher diffusion angular resolution but lower spatial resolution. Combining 3T and 7T dMRI may therefore provide more detailed and accurate information about voxel-wise fiber orientations, helping us better understand structural brain connectivity. However, this topic has not yet been thoroughly explored. In this study, we explored the feasibility of fusing 3T and 7T dMRI data to extract voxel-wise quantitative parameters at higher spatial resolution. After the 3T and 7T dMRI data were preprocessed, the 3T dMRI volumes were coregistered into 7T dMRI space. Then, the 7T dMRI data were harmonized to the coregistered 3T dMRI B0 (b = 0) images. Last, the harmonized 7T dMRI data were fused with the 3T dMRI data according to four fusion rules proposed in this study. We employed high-quality 3T and 7T dMRI datasets (N = 24) from the Human Connectome Project to test our algorithms. The diffusion tensors (DTs) and orientation distribution functions (ODFs) estimated from the 3T-7T fused dMRI volumes were statistically analyzed. More voxels containing multiple fiber populations were found in the fused dMRI data than in the 7T dMRI dataset. Moreover, extra fiber directions were extracted in temporal brain regions from the fused dMRI data at Otsu's thresholds of quantitative anisotropy, but could not be extracted from the 7T dMRI dataset. This study provides novel algorithms for fusing intra-subject 3T and 7T dMRI data to extract more detailed voxel-wise quantitative parameters, and a new perspective for building more accurate structural brain networks.


Subject(s)
Brain; Diffusion Magnetic Resonance Imaging; Image Processing, Computer-Assisted; Humans; Brain/diagnostic imaging; Diffusion Magnetic Resonance Imaging/methods; Diffusion Magnetic Resonance Imaging/standards; Male; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Female; Adult; Diffusion Tensor Imaging/methods; Diffusion Tensor Imaging/standards; Young Adult
12.
Hippocampus ; 34(6): 302-308, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38593279

ABSTRACT

Researchers who study the human hippocampus are naturally interested in how its subfields function. However, many researchers are precluded from examining subfields because their manual delineation from magnetic resonance imaging (MRI) scans (still the gold-standard approach) is time-consuming and requires significant expertise. To help ameliorate this issue, we present here two protocols, one for 3T MRI and the other for 7T MRI, that permit automated hippocampus segmentation into six subregions, namely dentate gyrus/cornu ammonis (CA)4, CA2/3, CA1, subiculum, pre/parasubiculum, and uncus, along the entire length of the hippocampus. These protocols are particularly notable relative to existing resources in that they were trained and tested using large numbers of healthy young adults (n = 140 at 3T, n = 40 at 7T) whose hippocampi were manually segmented by experts from MRI scans. Using inter-rater reliability analyses, we showed that the quality of the automated segmentations produced by these protocols was high and comparable to that of expert manual segmenters. We provide full open access to the automated protocols and anticipate they will save hippocampus researchers a significant amount of time. They could also help to catalyze subfield research, which is essential for gaining a full understanding of how the hippocampus functions.
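Agreement between automated and manual segmentations of this kind is commonly summarized with the Dice coefficient. A minimal sketch over voxel-index sets (the paper's own reliability analysis may differ in detail):

```python
# Dice similarity between two segmentations represented as sets of
# voxel indices: 2 * |A ∩ B| / (|A| + |B|).

def dice(a, b):
    """Dice similarity of two voxel-index sets (1.0 = identical)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

auto = {(1, 1), (1, 2), (2, 1)}    # voxels labelled CA1 by a protocol
manual = {(1, 2), (2, 1), (2, 2)}  # voxels labelled CA1 by an expert
print(dice(auto, manual))
```

Values near 1 indicate that the automated protocol reproduces the expert delineation voxel for voxel.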


Subject(s)
Hippocampus; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/standards; Hippocampus/diagnostic imaging; Male; Adult; Female; Young Adult; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Reproducibility of Results
13.
Neuroimage ; 292: 120617, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38636639

ABSTRACT

A primary challenge for data-driven analysis is balancing the poor generalizability of population-based research against the characterization of subject-, study-, and population-specific variability. We previously introduced a fully automated spatially constrained independent component analysis (ICA) framework called NeuroMark and its functional MRI (fMRI) template. NeuroMark has been successfully applied in numerous studies, identifying brain markers reproducible across datasets and disorders. The first NeuroMark template was constructed based on young adult cohorts. We recently expanded on this initiative by creating a standardized normative multi-spatial-scale functional template using over 100,000 subjects, aiming to improve generalizability and comparability across studies involving diverse cohorts. While a unified template across the lifespan is desirable, a comprehensive investigation of the similarities and differences between components from different age populations might help systematically transform our understanding of the human brain by revealing the most well-replicated and variable network features throughout the lifespan. In this work, we introduced two significant expansions of the NeuroMark templates: first, by generating replicable fMRI templates for infant, adolescent, and aging cohorts; and second, by incorporating structural MRI (sMRI) and diffusion MRI (dMRI) modalities. Specifically, we built spatiotemporal fMRI templates based on 6,000 resting-state scans from four datasets. This is the first attempt to create robust ICA templates covering dynamic brain development across the lifespan. For the sMRI and dMRI data, we used two large publicly available datasets including more than 30,000 scans to build reliable templates. We employed a spatial similarity analysis to identify replicable templates and to investigate the degree to which unique and similar patterns are reflected in different age populations. Our results suggest remarkably high similarity of the resulting adapted components, even across extreme age differences. With the new templates, the NeuroMark framework allows us to perform age-specific adaptations and to capture features adaptable to each modality, thereby facilitating biomarker identification across brain disorders. In sum, the present work demonstrates the generalizability of the NeuroMark templates and suggests the potential of the new templates to boost accuracy in mental health research and advance our understanding of lifespan and cross-modal alterations.
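A spatial similarity analysis between component maps can be sketched as a Pearson correlation over flattened voxel values; this is an illustrative simplification, not the NeuroMark matching procedure itself:

```python
# Pearson correlation between two flattened spatial component maps,
# used as a simple spatial-similarity score.
import math

def spatial_corr(map_a, map_b):
    n = len(map_a)
    ma, mb = sum(map_a) / n, sum(map_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(map_a, map_b))
    va = math.sqrt(sum((a - ma) ** 2 for a in map_a))
    vb = math.sqrt(sum((b - mb) ** 2 for b in map_b))
    return cov / (va * vb)

# Two hypothetical 4-voxel maps with similar spatial patterns
print(spatial_corr([0.1, 0.9, 0.2, 0.8], [0.2, 0.8, 0.3, 0.7]))
```

Matching components between an age-specific template and the reference template then amounts to pairing each component with its highest-correlation counterpart.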


Subject(s)
Brain; Magnetic Resonance Imaging; Humans; Adult; Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/standards; Brain/diagnostic imaging; Adolescent; Young Adult; Male; Aged; Female; Middle Aged; Infant; Child; Aging/physiology; Child, Preschool; Reproducibility of Results; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Aged, 80 and over; Neuroimaging/methods; Neuroimaging/standards; Diffusion Magnetic Resonance Imaging/methods; Diffusion Magnetic Resonance Imaging/standards
14.
J Neurosci Methods ; 406: 110112, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38508496

ABSTRACT

BACKGROUND: Visualizing edges is critical for neuroimaging. For example, edge maps enable quality assurance for the automatic alignment of an image from one modality (or individual) to another. NEW METHOD: We suggest that using the second derivative (difference of Gaussian, or DoG) provides robust edge detection. This method is tuned by size (which is typically known in neuroimaging) rather than intensity (which is relative). RESULTS: We demonstrate that this method performs well across a broad range of imaging modalities. The edge contours produced consistently form closed surfaces, whereas alternative methods may generate disconnected lines, introducing potential ambiguity in contiguity. COMPARISON WITH EXISTING METHODS: Current methods for computing edges are based on either the first derivative of the image (FSL) or a variation of the Canny edge detection method (AFNI). These methods suffer from two primary limitations. First, the crucial tuning parameter for each of these methods relates to image intensity. Unfortunately, image intensity is relative for most neuroimaging modalities, making the performance of these methods unreliable. Second, these existing approaches do not necessarily generate a closed edge/surface, which can reduce the ability to determine the correspondence between a represented edge and another image. CONCLUSION: The second derivative is well suited for neuroimaging edge detection. We provide this method as part of both the AFNI and FSL software packages, as standalone code, and online.
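The method's core idea, smoothing at two scales and subtracting so that edges appear as zero crossings, can be sketched in one dimension (the sigmas and test signal are illustrative; this is not the AFNI/FSL implementation):

```python
# 1D difference-of-Gaussian: smooth a signal at two scales, subtract,
# and read edges off the sign changes (zero crossings) of the result.
import math

def gaussian_kernel(sigma):
    r = int(3 * sigma)
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma):
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    # Replicate-pad the borders so the output has the input's length
    padded = [signal[0]] * r + list(signal) + [signal[-1]] * r
    return [sum(k[j] * padded[i + j] for j in range(len(k)))
            for i in range(len(signal))]

def dog(signal, sigma1=1.0, sigma2=1.6):
    a, b = smooth(signal, sigma1), smooth(signal, sigma2)
    return [x - y for x, y in zip(a, b)]

step = [0.0] * 10 + [1.0] * 10  # a single intensity edge at index 10
d = dog(step)
print(d[8], d[11])  # the sign change between these brackets the edge
```

Because the tuning parameters are the two sigmas (sizes), not an intensity threshold, the same settings transfer across modalities with different intensity scales, which is the property the abstract highlights.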


Subject(s)
Brain; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/standards; Brain/diagnostic imaging; Imaging, Three-Dimensional/methods; Imaging, Three-Dimensional/standards; Algorithms; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Neuroimaging/methods; Neuroimaging/standards
15.
Neuroimage Clin ; 42: 103585, 2024.
Article in English | MEDLINE | ID: mdl-38531165

ABSTRACT

Resting state functional magnetic resonance imaging (rsfMRI) provides researchers and clinicians with a powerful tool to examine functional connectivity across large-scale brain networks, with ever-increasing applications to the study of neurological disorders, such as traumatic brain injury (TBI). While rsfMRI holds unparalleled promise in systems neuroscience, its acquisition and analytical methodology across research groups is variable, resulting in a literature that is challenging to integrate and interpret. The focus of this narrative review is to address the primary methodological issues, including investigator decision points, in the application of rsfMRI to study the consequences of TBI. As part of the ENIGMA Brain Injury working group, we have collaborated to identify a minimum set of recommendations that are designed to produce results that are reliable, harmonizable, and reproducible for the TBI imaging research community. Part one of this review provides the results of a literature search of current rsfMRI studies of TBI, highlighting key design considerations and data processing pipelines. Part two outlines seven data acquisition, processing, and analysis recommendations with the goal of maximizing study reliability and between-site comparability, while preserving investigator autonomy. Part three summarizes new directions and opportunities for future rsfMRI studies in TBI patients. The goal is to galvanize the TBI community to reach consensus on a set of rigorous and reproducible methods, and to increase analytical transparency and data sharing to address the reproducibility crisis in the field.


Subject(s)
Brain Injuries, Traumatic , Magnetic Resonance Imaging , Humans , Brain Injuries, Traumatic/diagnostic imaging , Brain Injuries, Traumatic/physiopathology , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/standards , Reproducibility of Results , Brain/diagnostic imaging , Brain/physiopathology , Rest/physiology , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Brain Mapping/methods , Brain Mapping/standards
16.
Plant Physiol ; 195(1): 378-394, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38298139

ABSTRACT

Automated guard cell detection and measurement are vital for understanding plant physiological performance and ecological functioning in global water and carbon cycles. Most current methods for measuring guard cells and stomata are laborious, time-consuming, prone to bias, and limited in scale. We developed StoManager1, a high-throughput tool that uses geometric and mathematical algorithms together with convolutional neural networks to automatically detect, count, and measure over 30 guard cell and stomatal metrics, including guard cell and stomatal area, length, width, stomatal aperture area/guard cell area, orientation, stomatal evenness, divergence, and aggregation index. Combined with leaf functional traits, some of these StoManager1-measured guard cell and stomatal metrics explained 90% and 82% of the variance in tree biomass and intrinsic water use efficiency (iWUE), respectively, in hardwoods, making them substantial factors in leaf physiology and tree growth. StoManager1 demonstrated exceptional precision and recall (mAP@0.5 over 0.96), effectively capturing diverse stomatal properties across over 100 species. StoManager1 automates the measurement of leaf stomata and guard cells, enabling broader exploration of stomatal control in plant growth and adaptation to environmental stress and climate change. This has implications for global gross primary productivity (GPP) modeling and estimation, as integrating stomatal metrics can enhance predictions of plant growth and resource usage worldwide. Easily accessible open-source code and standalone Windows executable applications are available on a GitHub repository (https://github.com/JiaxinWang123/StoManager1) and Zenodo (https://doi.org/10.5281/zenodo.7686022).
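The mAP@0.5 figure quoted above means a predicted stoma counts as a true positive only when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal sketch of that overlap test (the helper and box coordinates below are illustrative, not StoManager1's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection matching a ground-truth stoma at IoU >= 0.5 counts as a
# true positive when computing mAP@0.5.
match = iou((0, 0, 10, 10), (2, 0, 12, 10)) >= 0.5  # overlap 80/120 = 2/3
```

Averaging precision over recall levels at this threshold, then over classes, yields the reported mAP@0.5.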


Subject(s)
Botany , Cell Biology , Plant Cells , Plant Stomata , Software , Plant Stomata/cytology , Plant Stomata/growth & development , Plant Cells/physiology , Botany/instrumentation , Botany/methods , Cell Biology/instrumentation , Image Processing, Computer-Assisted/standards , Algorithms , Plant Leaves/cytology , Neural Networks, Computer , High-Throughput Screening Assays/instrumentation , High-Throughput Screening Assays/methods , High-Throughput Screening Assays/standards , Software/standards
18.
IEEE J Biomed Health Inform ; 27(8): 3912-3923, 2023 08.
Article in English | MEDLINE | ID: mdl-37155391

ABSTRACT

Semi-supervised learning is becoming an effective solution in medical image segmentation because annotations are costly and tedious to acquire. Methods based on the teacher-student model use consistency regularization and uncertainty estimation and have shown good potential in dealing with limited annotated data. Nevertheless, the existing teacher-student model is seriously limited by the exponential moving average algorithm, which leads to an optimization trap. Moreover, the classic uncertainty estimation method calculates global uncertainty for images but does not consider local region-level uncertainty, making it unsuitable for medical images with blurry regions. In this article, the Voxel Stability and Reliability Constraint (VSRC) model is proposed to address these issues. Specifically, the Voxel Stability Constraint (VSC) strategy is introduced to optimize parameters and exchange effective knowledge between two independently initialized models, which can break through the performance bottleneck and avoid model collapse. Moreover, a new uncertainty estimation strategy, the Voxel Reliability Constraint (VRC), is proposed for use in our semi-supervised model to consider uncertainty at the local region level. We further extend our model to auxiliary tasks and propose a task-level consistency regularization with uncertainty estimation. Extensive experiments on two 3D medical image datasets demonstrate that our method outperforms other state-of-the-art semi-supervised medical image segmentation methods under limited supervision.
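For context, the exponential moving average coupling that VSRC argues against can be sketched as follows: in the classic mean-teacher setup, the teacher's weights track a smoothed copy of the student's, so the two networks never exchange genuinely independent knowledge. This toy update (plain Python lists standing in for parameter tensors) is an illustrative assumption, not the paper's code:

```python
def ema_update(teacher, student, alpha=0.99):
    """One EMA step: teacher <- alpha * teacher + (1 - alpha) * student."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]

# After many steps the teacher is a lagged, smoothed copy of the student's
# trajectory -- the tight coupling VSRC replaces with two independently
# initialized models that exchange knowledge instead.
teacher = [1.0, 0.0]
student = [0.0, 1.0]
teacher = ema_update(teacher, student, alpha=0.9)
```

Because the teacher is a deterministic function of the student's history, errors in the student propagate to the teacher, which is one way the "optimization trap" can arise.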


Subject(s)
Image Processing, Computer-Assisted , Supervised Machine Learning , Algorithms , Datasets as Topic , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Reproducibility of Results , Students , Teaching , Uncertainty , Humans
19.
Blood Adv ; 7(16): 4621-4630, 2023 08 22.
Article in English | MEDLINE | ID: mdl-37146262

ABSTRACT

Examination of red blood cell (RBC) morphology in peripheral blood smears can help diagnose hematologic diseases, even in resource-limited settings, but this analysis remains subjective and semiquantitative, with low throughput. Prior attempts to develop automated tools have been hampered by poor reproducibility and limited clinical validation. Here, we present a novel, open-source machine-learning approach (denoted RBC-diff) to quantify abnormal RBCs in peripheral smear images and generate an RBC morphology differential. RBC-diff cell counts showed high accuracy for single-cell classification (mean AUC, 0.93) and quantitation across smears (mean R2, 0.76 compared with experts; inter-expert R2, 0.75). RBC-diff counts were concordant with the clinical morphology grading for 300 000+ images and recovered the expected pathophysiologic signals in diverse clinical cohorts. Criteria using RBC-diff counts distinguished thrombotic thrombocytopenic purpura and hemolytic uremic syndrome from other thrombotic microangiopathies, providing greater specificity than clinical morphology grading (72% vs 41%; P < .001) while maintaining high sensitivity (94% to 100%). Elevated RBC-diff schistocyte counts were associated with increased 6-month all-cause mortality in a cohort of 58 950 inpatients (9.5% mortality for schistocytes >1% vs 4.7% for schistocytes <0.5%; P < .001) after controlling for comorbidities, demographics, clinical morphology grading, and blood count indices. RBC-diff also enabled the estimation of single-cell volume-morphology distributions, providing insight into the influence of morphology on routine blood count measures. Our codebase and expert-annotated images are included here to spur further advancement. These results illustrate that computer vision can enable rapid and accurate quantitation of RBC morphology, which may provide value in both clinical and research contexts.
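The sensitivity and specificity figures quoted above follow from the standard confusion-matrix definitions. A minimal sketch (the counts below are hypothetical, chosen only to reproduce the 94%/72% figures, and are not the study's data):

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: 50 TTP/HUS cases, 50 other-TMA cases.
# 47/50 true cases flagged (sensitivity 0.94);
# 36/50 non-cases correctly excluded (specificity 0.72).
sens, spec = sensitivity_specificity(tp=47, fp=14, tn=36, fn=3)
```

Raising the schistocyte-count threshold trades sensitivity for specificity, which is why the criteria are tuned to keep sensitivity high while improving on the 41% specificity of morphology grading alone.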


Subject(s)
Erythrocytes, Abnormal , Hematologic Diseases , Image Processing, Computer-Assisted , Humans , Erythrocytes, Abnormal/cytology , Hematologic Diseases/diagnostic imaging , Hematologic Diseases/pathology , Prognosis , Reproducibility of Results , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Machine Learning , Cell Shape
20.
Br J Radiol ; 96(1145): 20220704, 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-36802348

ABSTRACT

OBJECTIVE: The study aims to evaluate the diagnostic efficacy of radiologists and radiology trainees with digital breast tomosynthesis (DBT) alone vs DBT plus synthesized view (SV), to assess whether DBT images alone are adequate for identifying cancer lesions. METHODS: Fifty-five observers (30 radiologists and 25 radiology trainees) read a set of 35 cases (15 cancer), with 28 readers reading DBT and 27 readers reading DBT plus SV. The two groups of readers had similar experience in interpreting mammograms. The performance of participants in each reading mode was compared with the ground truth and calculated in terms of specificity, sensitivity, and ROC AUC. The cancer detection rates at various levels of breast density and for different lesion types and lesion sizes were also compared between DBT and DBT plus SV. The difference in diagnostic accuracy of readers between the two reading modes was assessed using the Mann-Whitney U test; p < 0.05 indicated a significant result. RESULTS: There was no significant difference in the specificity (0.67-vs-0.65; p = 0.69), sensitivity (0.77-vs-0.71; p = 0.09), or ROC AUC (0.77-vs-0.73; p = 0.19) of radiologists reading DBT plus SV compared with radiologists reading DBT. A similar result was found in radiology trainees, with no significant difference in specificity (0.70-vs-0.63; p = 0.29), sensitivity (0.44-vs-0.55; p = 0.19), or ROC AUC (0.59-vs-0.62; p = 0.60) between the two reading modes. Radiologists and trainees obtained similar cancer detection rates in the two reading modes across levels of breast density, cancer types, and lesion sizes (p > 0.05). CONCLUSION: The diagnostic performances of radiologists and radiology trainees with DBT alone and DBT plus SV were equivalent in identifying cancer and normal cases. ADVANCES IN KNOWLEDGE: DBT alone had diagnostic accuracy equivalent to DBT plus SV, which supports considering DBT as a sole modality without SV.
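The Mann-Whitney U test used in this study compares two independent samples of reader scores without assuming normality: U counts, over all cross-sample pairs, how often one sample's value exceeds the other's, with ties counting one half. A minimal sketch of the statistic (the per-reader AUC values below are illustrative, not the study's data):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y.

    Counts 1 for each pair (xi, yj) with xi > yj and 0.5 for ties;
    U ranges from 0 to len(x) * len(y).
    """
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical per-reader ROC AUCs in each reading mode
dbt_only = [0.77, 0.73, 0.80]
dbt_plus_sv = [0.73, 0.71, 0.75]
u = mann_whitney_u(dbt_only, dbt_plus_sv)  # U = 7.5 out of a maximum of 9
```

In practice the p-value is obtained from the exact U distribution (or a normal approximation for larger samples); statistical packages such as SciPy's `mannwhitneyu` implement both.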


Subject(s)
Breast Neoplasms , Image Processing, Computer-Assisted , Mammography , Radiologists , Radiologists/standards , Radiologists/statistics & numerical data , Breast/diagnostic imaging , Breast/pathology , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Mammography/standards , Image Processing, Computer-Assisted/standards , Humans , Female , Sensitivity and Specificity