Predicting cell cycle stage from 3D single-cell nuclear-stained images.
Li, Gang; Nichols, Eva K; Browning, Valentino E; Longhi, Nicolas J; Camplisson, Conor; Beliveau, Brian J; Noble, William Stafford.
Affiliation
  • Li G; Department of Genome Sciences, University of Washington.
  • Nichols EK; eScience Institute, University of Washington.
  • Browning VE; Department of Genome Sciences, University of Washington.
  • Longhi NJ; Department of Genome Sciences, University of Washington.
  • Camplisson C; Department of Bioengineering, University of Washington.
  • Beliveau BJ; Department of Genome Sciences, University of Washington.
  • Noble WS; Department of Genome Sciences, University of Washington.
bioRxiv; 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39257739
ABSTRACT
The cell cycle governs the proliferation, differentiation, and regeneration of all eukaryotic cells. Profiling cell cycle dynamics is therefore central to basic and biomedical research spanning development, health, aging, and disease. However, current approaches to cell cycle profiling involve complex interventions that may confound experimental interpretation. To facilitate more efficient cell cycle annotation of microscopy data, we developed CellCycleNet, a machine learning (ML) workflow designed to simplify cell cycle staging with minimal experimenter intervention and cost. CellCycleNet accurately predicts cell cycle phase using only a fluorescent nuclear stain (DAPI) in fixed interphase cells. Using the Fucci2a cell cycle reporter system as ground truth, we collected two benchmarking image datasets and trained two ML models, a support vector machine (SVM) and a deep neural network, to classify nuclei as being in either the G1 or S/G2 phases of the cell cycle. Our results suggest that CellCycleNet outperforms state-of-the-art SVM models on each dataset individually. When trained on two image datasets simultaneously, CellCycleNet achieves the highest classification accuracy, with an improvement in AUROC of 0.08-0.09. The model also demonstrates excellent generalization across different microscopes, achieving an AUROC of 0.95. Overall, using features derived from 3D images, rather than 2D projections of those same images, significantly improves classification performance. We have released our image data, trained models, and software as a community resource.
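The abstract frames cell cycle staging as a binary classification task (G1 vs. S/G2) evaluated by AUROC. The sketch below is not the authors' released CellCycleNet code; it is a minimal, hypothetical illustration of the SVM-baseline setup it describes, using scikit-learn and synthetic stand-in features (the feature values, class shift, and dimensionality are all invented for illustration).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins for nuclear-morphology features: G1 nuclei
# (label 0) and S/G2 nuclei (label 1) drawn from shifted Gaussians,
# loosely mimicking the larger size/intensity of S/G2 nuclei.
n = 500
X_g1 = rng.normal(0.0, 1.0, size=(n, 5))
X_sg2 = rng.normal(1.0, 1.0, size=(n, 5))
X = np.vstack([X_g1, X_sg2])
y = np.concatenate([np.zeros(n), np.ones(n)])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# SVM with probability outputs, so AUROC can be computed from
# continuous scores rather than hard class labels.
svm = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
scores = svm.predict_proba(X_te)[:, 1]
auroc = roc_auc_score(y_te, scores)
print(f"AUROC: {auroc:.3f}")
```

The same evaluation pattern (score held-out nuclei, compute AUROC) would apply to a deep neural network classifier; the paper's reported gain of 0.08-0.09 AUROC is the difference between two such evaluations.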

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: bioRxiv Year: 2024 Document type: Article Country of publication: United States
