Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-37655047

ABSTRACT

Technological advances in psychological research have enabled large-scale studies of human behavior and streamlined pipelines for automatic processing of data. However, studies of infants and children have not fully reaped these benefits because the behaviors of interest, such as gaze duration and direction, still have to be extracted from video through a laborious process of manual annotation, even when these data are collected online. Recent advances in computer vision raise the possibility of automated annotation of these video data. In this article, we built on a system for automatic gaze annotation in young children, iCatcher, by engineering improvements and then training and testing the system (referred to hereafter as iCatcher+) on three data sets with substantial video and participant variability (214 videos collected in U.S. lab and field sites, 143 videos collected in Senegal field sites, and 265 videos collected via webcams in homes; participant age range = 4 months-3.5 years). When trained on each of these data sets, iCatcher+ performed with near human-level accuracy on held-out videos in distinguishing "LEFT" versus "RIGHT" and "ON" versus "OFF" looking behavior across all data sets. This high performance was achieved at the level of individual frames, experimental trials, and study videos; held across participant demographics (e.g., age, race/ethnicity), participant behavior (e.g., movement, head position), and video characteristics (e.g., luminance); and generalized to a fourth, entirely held-out online data set. We close by discussing next steps required to fully automate the life cycle of online infant and child behavioral studies, representing a key step toward enabling robust and high-throughput developmental research.
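The abstract reports accuracy at the level of individual frames and of experimental trials, which implies some rule for aggregating per-frame gaze labels into a trial-level label. As a minimal sketch, assuming a simple majority vote (the function name, tie-breaking rule, and default label here are illustrative assumptions, not details from the paper):

```python
# Hypothetical sketch: aggregate per-frame gaze labels ("LEFT", "RIGHT",
# "ON", "OFF") into one trial-level label by majority vote. This is one
# plausible aggregation scheme, not necessarily the one used by iCatcher+.
from collections import Counter

def trial_label(frame_labels):
    """Return the most frequent frame label; ties resolved by first occurrence."""
    if not frame_labels:
        return "OFF"  # assumed default for an empty trial
    counts = Counter(frame_labels)
    return counts.most_common(1)[0][0]
```

For example, a trial whose frames are mostly labeled "LEFT" would be reported as a "LEFT" trial even if a few frames are labeled "OFF".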

2.
Appl Opt ; 62(18): 4987-5002, 2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37707277

ABSTRACT

The wide field survey telescope (WFST) is a 2.5 m optical survey telescope currently under construction in China. The telescope employs a primary-focus optical design to achieve a wide field of view of 3 deg, and its focal plane is equipped with four pairs of curvature sensors to perform wavefront sensing and active optics. Several wavefront solution algorithms are currently available for curvature sensors, including the iterative fast Fourier transform method, the orthogonal series expansion method, the Green's function method, and the sensitivity matrix method; however, each of these methods has limitations in practical use. This study proposes a solution method based on a convolutional neural network with a U-Net structure for the curvature wavefront sensing of the WFST. Numerical simulations show that the model, when properly trained, achieves high accuracy and solves the curvature wavefront effectively, and a comparison with the sensitivity matrix method demonstrates its superiority. Finally, the study is summarized, and the drawbacks of the proposed method are discussed, pointing toward directions for future optimization.
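In curvature wavefront sensing, the signal fed to a solver is conventionally built from a pair of intra- and extra-focal intensity images via the normalized difference S = (I_in - I_out) / (I_in + I_out). A minimal sketch of that preprocessing step follows; the normalization is standard in curvature sensing, but whether the WFST pipeline uses exactly this form as the network input is an assumption here:

```python
# Sketch of the normalized curvature-sensing signal: the difference of
# intra- and extra-focal images divided by their sum, which removes the
# common intensity scale before the signal reaches a wavefront solver.
import numpy as np

def curvature_signal(i_intra, i_extra, eps=1e-12):
    """Normalized intensity difference (I_in - I_out) / (I_in + I_out)."""
    i_intra = np.asarray(i_intra, dtype=float)
    i_extra = np.asarray(i_extra, dtype=float)
    # eps guards against division by zero in dark (zero-intensity) pixels.
    return (i_intra - i_extra) / (i_intra + i_extra + eps)
```

The resulting signal map (one value per pixel, bounded in [-1, 1]) is the kind of image-shaped input a U-Net-style model can map to wavefront coefficients or a phase map.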
