Results 1 - 20 of 29,163
1.
Food Chem ; 462: 140911, 2025 Jan 01.
Article in English | MEDLINE | ID: mdl-39213969

ABSTRACT

This study presents a low-cost smartphone-based imaging technique called smartphone video imaging (SVI) to capture short videos of samples illuminated by a colour-changing screen. Assisted by artificial intelligence, the study develops new capabilities that make SVI a versatile imaging technique comparable to hyperspectral imaging (HSI). SVI enables classification of samples with heterogeneous contents, spatial representation of analyte contents, and reconstruction of hyperspectral images from videos. When integrated with a residual neural network, SVI outperforms traditional computer vision methods for ginseng classification. Moreover, the technique effectively maps the spatial distribution of saffron purity in powder mixtures with predictive performance comparable to that of HSI. In addition, SVI combined with the U-Net deep learning module can produce high-quality images that closely resemble the target images acquired by HSI. These results suggest that SVI can serve as a consumer-oriented solution for food authentication.
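The reconstruction step described above amounts to treating each screen illumination colour as one spectral band. A minimal sketch of that data structure follows; the function name and toy inputs are illustrative, not from the paper:

```python
import numpy as np

def frames_to_spectral_cube(frames_by_color):
    """Stack per-colour grayscale frames into an (H, W, bands) cube.

    `frames_by_color` maps an illumination label (e.g. "red") to a
    2-D frame averaged over that colour's segment of the video.
    Band order is fixed by sorting the labels alphabetically.
    """
    labels = sorted(frames_by_color)
    cube = np.stack([frames_by_color[k] for k in labels], axis=-1)
    return labels, cube

# Toy example: three 4x4 frames captured under three screen colours.
frames = {c: np.full((4, 4), i, dtype=float)
          for i, c in enumerate(["blue", "green", "red"])}
bands, cube = frames_to_spectral_cube(frames)
```

A learned model (such as the U-Net mentioned in the abstract) would then map such a cube to HSI-like target images.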


Subject(s)
Smartphone, Hyperspectral Imaging/methods, Image Processing, Computer-Assisted/methods, Food Contamination/analysis, Video Recording, Food Analysis
2.
J Forensic Odontostomatol ; 42(2): 50-59, 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39244766

ABSTRACT

INTRODUCTION: The aim of this study was to evaluate whether a forensic odontologist working remotely could accurately undertake forensic dental identifications using videos produced by non-dental forensic staff operating an intra-oral video camera (IOVC), and to assess the accuracy of, and time taken for, remote forensic dental identifications performed in this manner. MATERIALS AND METHODS: Eight cadavers from the Centre for Anatomy and Human Identification (CAHID), University of Dundee, UK, were examined by a forensic odontologist via a traditional dental examination. Their dental condition was recorded to serve as ante-mortem records for this study. Videos of each dentition were produced using an IOVC operated by a medical student. Post-mortem records were produced for each dentition from the videos by a remote second forensic odontologist who was not present at the traditional dental examination. The ante-mortem and post-mortem records were then compared, and identification was classified as positively established, possible, or excluded. RESULTS: Identification was positively established in all eight cases, although there were some non-critical inconsistencies between ante-mortem and post-mortem records. Before the second opinion, 85.6% of the teeth per study subject were charted consistently; after the second opinion, consistency increased to 97.2%. Each video averaged about 4.13 minutes in duration, and the average time taken to interpret and chart the post-mortem dental examination at the first attempt was 11.63 minutes. The time taken to chart from the videos was greater than is typical of a traditional dental examination. CONCLUSION: This pilot study supports the feasibility of undertaking remote dental identification.
This novel "tele-dental virtopsy" approach could be a viable alternative to a traditional post-mortem dental examination, in situations where access to forensic dental services is difficult or limited due to geographical, logistical, safety, and/or political reasons.


Subject(s)
Forensic Dentistry, Video Recording, Humans, Forensic Dentistry/methods, Cadaver, Dentition, Autopsy/methods, Remote Consultation, Dental Records
3.
Int Anesthesiol Clin ; 62(4): 48-58, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39233571

ABSTRACT

Tracheal intubation is a fundamental facet of airway management, for which the importance of achieving success at the first attempt is well recognized. Failure to do so can lead to significant morbidity and mortality if there is inadequate patient oxygenation by alternate means. The evidence supporting the benefits of a videolaryngoscope in attaining this objective is now overwhelming (in adults). This has led to its increasing recognition in international airway management guidelines and its promotion from an occasional airway rescue tool to the first-choice device during routine airway management. However, usage in clinical practice does not currently reflect the increased worldwide availability that followed the upsurge in videolaryngoscope purchasing during the coronavirus disease 2019 pandemic. There are a number of obstacles to widespread adoption, including lack of adequate training, fears over de-skilling at direct laryngoscopy, equipment and cleaning costs, and concerns over the environmental impact, among others. It is now clear that for patients to benefit maximally from the technology and for airway managers to fully appreciate its role in everyday practice, proper training and education are necessary. Recent research evidence has addressed some existing barriers to default usage, and the emergence of techniques such as awake videolaryngoscopy and video-assisted flexible (bronchoscopic) intubation has also increased the scope of clinical application. Future studies will likely further confirm the superiority of videolaryngoscopy over direct laryngoscopy; it is therefore incumbent upon all airway managers (and their teams) to gain expertise in videolaryngoscopy and to use it routinely in their everyday practice.


Subject(s)
Airway Management, Intubation, Intratracheal, Laryngoscopy, Humans, Laryngoscopy/methods, Airway Management/methods, Intubation, Intratracheal/methods, COVID-19, Laryngoscopes, Video Recording
4.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(4): 715-723, 2024 Aug 25.
Article in Chinese | MEDLINE | ID: mdl-39218597

ABSTRACT

Animal localization and trajectory tracking are of great value for the study of brain spatial cognition and navigation neural mechanisms. However, traditional optical-lens video positioning techniques are limited in scope by factors such as camera perspective. For pigeons, which have excellent spatial cognition and navigation abilities, a three-dimensional (3D) trajectory positioning and tracking method suitable for large indoor spaces was proposed based on beacon positioning technology, and the corresponding positioning principle and hardware structure are described. The results of in vitro and in vivo experiments showed that the system could achieve centimeter-level positioning and trajectory tracking of pigeons in a space of 360 cm × 200 cm × 245 cm. Compared with traditional optical-lens video positioning techniques, this system has the advantages of large coverage, high precision, and high response speed. It not only helps to study the neural mechanisms of pigeon 3D spatial cognition and navigation, but also has high reference value for trajectory tracking of other animals.
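The core of beacon positioning is recovering a 3D position from ranges to fixed beacons. A minimal least-squares trilateration sketch, using the abstract's 360 cm × 200 cm × 245 cm volume for the toy geometry (the specific beacon layout and ranging method are assumptions, not from the paper):

```python
import numpy as np

def trilaterate(beacons, distances):
    """Least-squares 3-D position from ranges to >= 4 fixed beacons.

    Subtracting the first range equation from the others linearizes
    ||x - b_i||^2 = d_i^2 into A x = c, solved by least squares.
    """
    beacons = np.asarray(beacons, dtype=float)
    d = np.asarray(distances, dtype=float)
    b0, d0 = beacons[0], d[0]
    A = 2.0 * (beacons[1:] - b0)
    c = (d0**2 - d[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    x, *_ = np.linalg.lstsq(A, c, rcond=None)
    return x

# Beacons at corners of a 360 x 200 x 245 cm volume (cm units).
beacons = [(0, 0, 0), (360, 0, 0), (0, 200, 0), (0, 0, 245)]
target = np.array([120.0, 80.0, 100.0])
ranges = [np.linalg.norm(target - np.array(b)) for b in beacons]
est = trilaterate(beacons, ranges)
```

With noise-free ranges the estimate recovers the target exactly; real ranging noise determines the centimeter-level accuracy reported above.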


Subject(s)
Columbidae, Spatial Navigation, Columbidae/physiology, Animals, Spatial Navigation/physiology, Imaging, Three-Dimensional, Video Recording, Cognition
5.
JMIR Form Res ; 8: e51513, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39226540

ABSTRACT

BACKGROUND: Coronary heart disease (CHD) is a leading cause of death worldwide and imposes a significant economic burden. TikTok has risen as a favored platform within the social media sphere for disseminating CHD-related information and stands as a pivotal resource for patients seeking knowledge about CHD. However, the quality of such content on TikTok remains largely unexplored. OBJECTIVE: This study aims to assess the quality of information conveyed in TikTok CHD-related videos. METHODS: A comprehensive cross-sectional study was undertaken on TikTok videos related to CHD. The sources of the videos were identified and analyzed. The comprehensiveness of content was assessed through 6 questions addressing the definition, signs and symptoms, risk factors, evaluation, management, and outcomes. The quality of the videos was assessed using 3 standardized evaluative instruments: DISCERN, the Journal of the American Medical Association (JAMA) benchmarks, and the Global Quality Scale (GQS). Furthermore, correlative analyses between video quality and characteristics of the uploaders and the videos themselves were conducted. RESULTS: The search yielded 145 CHD-related videos from TikTok, predominantly uploaded by health professionals (n=128, 88.3%), followed by news agencies (n=6, 4.1%), nonprofit organizations (n=10, 6.9%), and for-profit organizations (n=1, 0.7%). Content comprehensiveness achieved a median score of 3 (IQR 2-4). Median values for the DISCERN, JAMA, and GQS evaluations across all videos stood at 27 (IQR 24-32), 2 (IQR 2-2), and 2 (IQR 2-3), respectively. Videos from health professionals and nonprofit organizations attained significantly superior JAMA scores in comparison to those of news agencies (P<.001 and P=.02, respectively), whereas GQS scores for videos from health professionals were also notably higher than those from news agencies (P=.048). 
Within health professionals, cardiologists demonstrated discernibly better performance than noncardiologists in both DISCERN and GQS assessments (P=.02). Correlative analyses revealed positive correlations between uploader metrics (number of followers, total likes, and average likes per video) and the established quality indices (DISCERN, JAMA, and GQS scores). Similar analyses of video attributes showed correlations between user engagement factors (likes, comments, collections, and shares) and the same quality indicators. In contrast, the number of days since upload correlated negatively with the quality indices, while longer video duration corresponded with higher DISCERN and GQS scores. CONCLUSIONS: The quality of the videos was generally poor, with significant disparities based on source category. The content comprehensiveness proved insufficient, casting doubt on the reliability and quality of the information relayed through these videos. Among health professionals, video contributions from cardiologists exhibited superior quality compared to those from noncardiologists. As TikTok's role in health information dissemination expands, ensuring accurate and reliable content is crucial to better meet patients' needs for CHD information that conventional health education fails to fulfill.
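The uploader-metric correlations reported in such studies are typically rank-based. A self-contained Spearman correlation sketch; the follower counts and DISCERN scores below are invented for illustration, not the study's data:

```python
def rankdata(xs):
    """Ranks starting at 1 (no tie handling needed for this sketch)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(xs, ys):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = rankdata(xs), rankdata(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical per-video records: follower count vs. DISCERN score.
followers = [120, 4500, 300, 98000, 2200, 15000, 640, 51000]
discern = [24, 30, 25, 41, 28, 33, 26, 38]
rho = spearman(followers, discern)
```

Rank-based correlation is the natural choice here because follower counts are heavily skewed across uploaders.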


Subject(s)
Coronary Disease, Social Media, Video Recording, Cross-Sectional Studies, Humans, Information Dissemination/methods
8.
Nat Commun ; 15(1): 7629, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39223110

ABSTRACT

Recent advances in technology for hyper-realistic visual and audio effects provoke the concern that deepfake videos of political speeches will soon be indistinguishable from authentic video. We conduct 5 pre-registered randomized experiments with N = 2215 participants to evaluate how accurately humans distinguish real political speeches from fabrications across base rates of misinformation, audio sources, question framings with and without priming, and media modalities. We do not find that base rates of misinformation have statistically significant effects on discernment. We find that deepfakes with audio produced by state-of-the-art text-to-speech algorithms are harder to discern than the same deepfakes with voice-actor audio. Moreover, across all experiments and question framings, we find that audio and visual information enables more accurate discernment than text alone: human discernment relies more on how something is said (the audio-visual cues) than on what is said (the speech content).


Subject(s)
Politics, Speech, Video Recording, Humans, Female, Male, Adult, Young Adult, Communication, Algorithms
9.
Transl Vis Sci Technol ; 13(9): 5, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39226062

ABSTRACT

Purpose: The purpose of this study was to develop deep learning models for surgical video analysis, capable of identifying minimally invasive glaucoma surgery (MIGS) and locating the trabecular meshwork (TM). Methods: For classification of surgical steps, we had 313 video files (265 for cataract surgery and 48 for MIGS procedures), and for TM segmentation, we had 1743 frames (1110 with TM and 633 without TM). We used transfer learning to update a classification model pretrained to recognize standard cataract surgical steps, enabling it to also identify MIGS procedures. For TM localization, we developed three different models: U-Net, Y-Net, and Cascaded. Segmentation accuracy for TM was measured by calculating the average pixel error between the predicted and ground-truth TM locations. Results: Using transfer learning, we developed a model that achieved 87% accuracy for MIGS frame classification, with an area under the receiver operating characteristic curve (AUROC) of 0.99. This model maintained 79% accuracy for identifying 14 standard cataract surgery steps. The overall micro-averaged AUROC was 0.98. The U-Net model excelled in TM segmentation with an intersection-over-union (IoU) score of 0.9988 and an average pixel error of 1.47. Conclusions: Building on prior work developing computer vision models for cataract surgical video, we developed models that recognize MIGS procedures and precisely localize the TM with superior performance. Our work demonstrates the potential of transfer learning for extending computer vision models to new surgeries without the need for extensive additional data collection. Translational Relevance: Computer vision models for surgical video can underpin systems offering automated feedback to trainees, improving surgical training and patient care.
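The two segmentation metrics reported here, IoU and average pixel error, are standard and easy to state precisely. A sketch with toy masks and landmark coordinates (the inputs are illustrative, not the study's data):

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-union of two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union

def average_pixel_error(pred_pts, truth_pts):
    """Mean Euclidean distance between predicted and true landmark
    locations, given as (N, 2) arrays of pixel coordinates."""
    diff = np.asarray(pred_pts, float) - np.asarray(truth_pts, float)
    return float(np.linalg.norm(diff, axis=1).mean())

# Toy masks: the prediction hits 2 of the 4 true TM pixels plus one
# false positive, so |intersection| = 2 and |union| = 5.
truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:2] = True
pred[0, 0] = True
```

Calling `iou(pred, truth)` on these toy masks gives 2/5 = 0.4.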


Subject(s)
Cataract Extraction, Deep Learning, Trabecular Meshwork, Humans, Trabecular Meshwork/surgery, Cataract Extraction/methods, Minimally Invasive Surgical Procedures, Glaucoma/surgery, Glaucoma/diagnosis, ROC Curve, Video Recording
10.
South Med J ; 117(9): 551-555, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39227049

ABSTRACT

OBJECTIVES: The coronavirus disease 2019 pandemic catalyzed a rapid shift toward remote learning in medicine. This study hypothesized that using videos on adverse events and patient safety event reporting systems could enhance education and motivation among healthcare professionals, leading to improved performance on quizzes compared with standard, in-person lectures. METHODS: In this study, performed in 2022, participants were randomly assigned either to a group that both watched the video and attended an in-person lecture or to a group that received only the in-person lecture. Surveys gathered demographic information, tested knowledge, and identified barriers to reporting adverse events. RESULTS: A total of 83 unique participants responded to the survey out of the 130 students enrolled (64%; 83/130). Among the students completing all of the surveys, the group who watched the Osmosis video had a higher average quiz score (6.46/7) than the lecture group (6.31/7) following the first intervention. Only 25% of respondents agreed or strongly agreed that they knew what to include in a patient safety report, and only 10% agreed or strongly agreed that they knew how to access the reporting system. CONCLUSIONS: This study suggests virtual preclass video learning can be a beneficial tool to complement traditional lecture-based learning in medical education. Further research is needed to determine the efficacy of long-term video interventions in adverse-event education.


Subject(s)
COVID-19, Video Recording, Humans, COVID-19/prevention & control, Female, Male, Patient Safety, Students, Medical, Education, Distance/methods, Education, Medical, Undergraduate/methods, Adult, Educational Measurement/methods, SARS-CoV-2, Surveys and Questionnaires, Education, Medical/methods, Medical Errors/prevention & control
11.
J Sports Sci Med ; 23(1): 515-525, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39228769

ABSTRACT

OpenPose-based motion analysis (OpenPose-MA), utilizing deep learning methods, has emerged as a compelling technique for estimating human motion. It addresses the drawbacks associated with conventional three-dimensional motion analysis (3D-MA) and human visual detection-based motion analysis (Human-MA), including costly equipment, time-consuming analysis, and restricted experimental settings. This study aims to assess the precision of OpenPose-MA in comparison to Human-MA, using 3D-MA as the reference standard. The study involved a cohort of 21 young and healthy adults. OpenPose-MA employed the OpenPose algorithm, a deep learning-based open-source two-dimensional (2D) pose estimation method. Human-MA was conducted by a skilled physiotherapist. The knee valgus angle during a drop vertical jump (DVJ) task was computed by OpenPose-MA and Human-MA using the same frontal-plane video image, with 3D-MA serving as the reference standard. Various metrics were utilized to assess the reproducibility, accuracy, and similarity of the knee valgus angle between the different methods, including the intraclass correlation coefficient (ICC) (1, 3), mean absolute error (MAE), coefficient of multiple correlation (CMC) for waveform pattern similarity, and Pearson's correlation coefficients (OpenPose-MA vs. 3D-MA, Human-MA vs. 3D-MA). Unpaired t-tests were conducted to compare MAEs and CMCs between OpenPose-MA and Human-MA. The ICCs (1, 3) for OpenPose-MA, Human-MA, and 3D-MA demonstrated excellent reproducibility in the DVJ trial. No significant difference between OpenPose-MA and Human-MA was observed in terms of the MAEs (OpenPose: 2.4° [95% CI: 1.9-3.0°], Human: 3.2° [95% CI: 2.1-4.4°]) or CMCs (OpenPose: 0.83 [range: 0.53-0.99], Human: 0.87 [range: 0.24-0.98]) of knee valgus angles. The Pearson's correlation coefficients of OpenPose-MA and Human-MA relative to 3D-MA were 0.97 and 0.98, respectively.
This study demonstrated that OpenPose-MA achieved satisfactory reproducibility and accuracy, and exhibited waveform similarity to 3D-MA comparable to that of Human-MA. Both OpenPose-MA and Human-MA showed a strong correlation with 3D-MA in terms of knee valgus angle excursion.
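Two of the comparison metrics above, MAE and Pearson's r between angle waveforms, can be sketched directly; the knee-valgus waveforms below are invented for illustration:

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two angle waveforms (degrees)."""
    return float(np.mean(np.abs(np.asarray(a) - np.asarray(b))))

def pearson_r(a, b):
    """Pearson correlation of two equal-length waveforms."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a, b = a - a.mean(), b - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical knee-valgus waveforms: a 2-D estimate vs. the 3-D
# reference, with small frame-by-frame deviations.
ref = np.array([2.0, 5.0, 9.0, 12.0, 8.0, 4.0])
est = ref + np.array([0.5, -0.5, 1.0, -1.0, 0.5, -0.5])
```

On these toy data `mae(est, ref)` is about 0.67° and `pearson_r(est, ref)` is close to 1, mirroring the kind of agreement the study reports between OpenPose-MA and 3D-MA.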


Subject(s)
Deep Learning, Humans, Reproducibility of Results, Young Adult, Male, Female, Biomechanical Phenomena, Knee Joint/physiology, Video Recording, Adult, Time and Motion Studies, Algorithms, Exercise Test/methods, Plyometric Exercise, Range of Motion, Articular/physiology, Imaging, Three-Dimensional
12.
PLoS One ; 19(9): e0308461, 2024.
Article in English | MEDLINE | ID: mdl-39231116

ABSTRACT

The Ocean Networks Canada (ONC) cabled video-observatory at the Barkley Canyon Node (British Columbia, Canada) was recently the site of a Fish Acoustics and Attraction Experiment (FAAE), from May 21, 2022 to July 16, 2023, combining observations from High-Definition (HD) video, acoustic imaging sonar, and underwater sounds at a depth of 645 m, to examine the effects of light and bait on deep-sea fish and invertebrate behaviors. The unexpected presence of at least eight (six recurrent and two temporary) sub-adult male northern elephant seals (Mirounga angustirostris) was reported in 113 and 210 recordings out of 9737 HD and 2805 sonar videos at the site, respectively. Elephant seals were found at the site during seven distinct periods between June 22, 2022 and May 19, 2023. Ethograms provided insights into the seal's deep-sea resting and foraging strategies, including prey selection. We hypothesized that the ability of elephant seals to perform repeated visits to the same site over long periods (> 10 days) was due to the noise generated by the sonar, suggesting that they learned to use that anthropogenic source as an indicator of food location, also known as the "dinner bell" effect. One interpretation is that elephant seals are attracted to the FAAE site due to the availability of prey and use the infrastructure as a foraging and resting site, but then take advantage of fish disturbance caused by the camera lights to improve foraging success. Our video observations demonstrated that northern elephant seals primarily focused on actively swimming sablefish (Anoplopoma fimbria), ignoring stationary or drifting prey. Moreover, we found that elephant seals appear to produce (voluntary or involuntary) infrasonic sounds in a foraging context. This study highlights the utility of designing marine observatories with spatially and temporally cross-referenced data collection from instruments representing multiple modalities of observation.


Subject(s)
Phocidae, Video Recording, Animals, Phocidae/physiology, Male, Behavior, Animal/physiology, British Columbia, Predatory Behavior/physiology, Acoustics
13.
Sci Rep ; 14(1): 20584, 2024 09 04.
Article in English | MEDLINE | ID: mdl-39232015

ABSTRACT

Undercover videos have become a popular tool among NGOs to influence public opinion and generate engagement for the NGO's cause. These videos are seen as a powerful and cost-effective way of bringing about social change, as they provide first-hand evidence and generate a strong emotional response among those who see them. In this paper, we empirically assess the impact of undercover videos on support for the cause. In addition, we analyze whether the increased engagement among viewers is driven by the negative emotional reactions produced by the video. To do so, we design an online experiment that enables us to estimate both the total and emotion-mediated treatment effects on engagement by randomly exposing participants to an undercover video (of animal abuse) and randomly introducing a cooling-off period. Using a representative sample of the French population (N=3,310), we find that the video successfully increases actions in favor of animals (i.e., donations to NGOs and petitions), but we fail to prove that this effect is due to the presence of primary emotions induced by the video. Last, we investigate whether activists correctly anticipate their undercover videos' (emotional) impact via a prediction study involving activists (exploratory analysis). PROTOCOL REGISTRATION: This manuscript is a Stage-2 working paper of a Registered Report that received In-Principle Acceptance from Scientific Reports on November 20th, 2023. The Stage-1 protocol can be found here: https://osf.io/8cg2d .


Subject(s)
Emotions, Social Behavior, Video Recording, Humans, Emotions/physiology, Male, Female, Adult, Public Opinion, Animals, Middle Aged, Young Adult, Animal Welfare
14.
Sci Rep ; 14(1): 20604, 2024 09 04.
Article in English | MEDLINE | ID: mdl-39232044

ABSTRACT

Lung cancer has emerged as a major global public health concern. With growing public interest in lung cancer, online searches for related information have surged. However, a comprehensive evaluation of the credibility, quality, and value of lung cancer-related videos on digital media platforms remains unexamined. This study aimed to assess the informational quality and content of lung cancer-related videos on Douyin and Bilibili. A total of 200 lung cancer-related videos that met the criteria were selected from Douyin and Bilibili for evaluation and analysis. The first step involved recording and analyzing the basic information provided in the videos. Subsequently, the source and type of content for each video were identified. All videos' educational content and quality were then evaluated using JAMA, GQS, and Modified DISCERN. Douyin videos were found to be more popular in terms of likes, comments, favorites, and shares, whereas Bilibili videos were longer in duration (P < .001). The majority of video content on both platforms comprised lung cancer introductions (31/100, 31%), with medical professionals being the primary source of uploaded videos (Douyin, n = 55, 55%; Bilibili, n = 43, 43%). General users on Douyin scored the lowest on the JAMA scale, whereas for-profit businesses scored the highest (2.50 points). The results indicated that the videos' informational quality was insufficient. Videos from science communications and health professionals were deemed more reliable regarding completeness and content quality compared to videos from other sources. The public should exercise caution and consider the scientific validity when seeking healthcare information on short video platforms.


Subject(s)
Lung Neoplasms, Humans, China, Video Recording, Information Dissemination/methods, Information Sources
15.
Medicine (Baltimore) ; 103(22): e38437, 2024 May 31.
Article in English | MEDLINE | ID: mdl-39259074

ABSTRACT

In this study, we analyzed the efficacy of animated educational videos and cluster nursing in the treatment of severe pneumonia in children. A total of 140 patients with severe pneumonia in our hospital from October 2022 to October 2023 were selected as the research subjects and divided into a control group and an observation group. The control group received routine care, while the observation group received animated educational videos and cluster nursing interventions. The treatment effects of the 2 groups of patients were compared, including clinical indicators such as body temperature recovery time, blood oxygen saturation recovery time, heart rate recovery time, consciousness recovery time, and respiratory rate recovery time. The results showed that the temperature recovery time, oxygen saturation recovery time, heart rate recovery time, and respiratory rate recovery time in the observation group differed significantly from those in the control group (P < .05). Univariate analysis showed statistically significant differences in economic conditions, extrapulmonary complications, and nursing methods between families with and without anxiety disorder. Logistic multivariate regression analysis showed that nursing methods, extrapulmonary complications, and poor economic conditions (income < 5000) were risk factors for anxiety among family members of severe pneumonia patients, while good economic conditions (income > 5000) were protective factors. Thus, animated educational videos and cluster nursing can effectively improve nursing effectiveness in children with severe pneumonia and promote their recovery.


Subject(s)
Pneumonia, Humans, Male, Female, Case-Control Studies, Child, Preschool, Infant, Child, Patient Care Bundles/methods, Video Recording, Severity of Illness Index
16.
Sensors (Basel) ; 24(17)2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39275572

ABSTRACT

Geoffroy's spider monkeys, an endangered, fast-moving arboreal primate species with a large home range and a high degree of fission-fusion dynamics, are challenging to survey in their natural habitats. Our objective was to evaluate how different flight parameters affect the detectability of spider monkeys in videos recorded by a drone equipped with a thermal infrared camera and examine the level of agreement between coders. We used generalized linear mixed models to evaluate the impact of flight speed (2, 4, 6 m/s), flight height (40, 50 m above ground level), and camera angle (-45°, -90°) on spider monkey counts in a closed-canopy forest in the Yucatan Peninsula, Mexico. Our results indicate that none of the three flight parameters affected the number of detected spider monkeys. Agreement between coders was "substantial" (Fleiss' kappa coefficient = 0.61-0.80) in most cases for high thermal-contrast zones. Our study contributes to the development of standardized flight protocols, which are essential to obtain accurate data on the presence and abundance of wild populations. Based on our results, we recommend performing drone surveys for spider monkeys and other medium-sized arboreal mammals with a small commercial drone at a 4 m/s speed, 15 m above canopy height, and with a -90° camera angle. However, these recommendations may vary depending on the size and noise level produced by the drone model.


Subject(s)
Atelinae, Forests, Infrared Rays, Animals, Atelinae/physiology, Aircraft, Mexico, Ecosystem, Video Recording/methods, Flight, Animal/physiology
18.
J Neurosci Methods ; 411: 110270, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39222797

ABSTRACT

BACKGROUND: The development of Raspberry Pi-based recording devices for video analyses of drug self-administration studies has been shown to be promising in terms of affordability, customizability, and capacity to extract in-depth behavioral patterns. Yet, most video recording systems are limited to a few cameras, making them incompatible with large-scale studies. NEW METHOD: We expanded the PiRATeMC (Pi-based Remote Acquisition Technology for Motion Capture) recording system by increasing its scale, modifying its code, and adding equipment to accommodate large-scale video acquisition, accompanied by data on throughput capabilities, video fidelity, synchronicity of devices, and comparisons between Raspberry Pi 3B+ and 4B models. RESULTS: Using PiRATeMC default recording parameters resulted in minimal storage (∼350 MB/h), high throughput (< ∼120 seconds/Pi), high video fidelity, and synchronicity within ∼0.02 seconds, affording the ability to simultaneously record 60 animals in individual self-administration chambers for various session lengths at a fraction of commercial costs. No consequential differences were found between Raspberry Pi models. COMPARISON WITH EXISTING METHOD(S): This system simultaneously acquires an order of magnitude more video data than other video recording systems, with lower storage needs and lower costs. Additionally, we report in-depth quantitative assessments of throughput, fidelity, and synchronicity, displaying real-time system capabilities. CONCLUSIONS: The system presented can be fully installed in a month's time by a single technician and provides a scalable, low-cost, quality-assured procedure with a high degree of customization and synchronicity between recording devices, capable of recording a large number of subjects and timeframes with high turnover in a variety of species and settings.
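The reported ∼350 MB/h per-camera figure makes storage planning for the 60-chamber setup a simple calculation; a back-of-envelope sketch (the 6-hour session length is an assumed example, not from the paper):

```python
def session_storage_gb(n_cameras, hours, mb_per_hour=350):
    """Approximate disk usage in GB for a multi-camera session,
    using the ~350 MB/h per-camera figure reported for default
    PiRATeMC recording parameters."""
    return n_cameras * hours * mb_per_hour / 1024.0

# 60 self-administration chambers recorded for a 6-hour session:
# roughly 123 GB total across all Pis.
usage = session_storage_gb(60, 6)
```

This kind of estimate is what makes the order-of-magnitude scaling claim concrete: a single commodity drive absorbs a full large-cohort session.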


Subject(s)
Conditioning, Operant, Video Recording, Animals, Video Recording/methods, Video Recording/instrumentation, Conditioning, Operant/physiology, Male, Self Administration/instrumentation, Rats, Behavior, Animal/physiology, Cocaine/administration & dosage
19.
JAMA Netw Open ; 7(9): e2432851, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39264628

ABSTRACT

Importance: Stereotypical motor movements (SMMs) are a form of restricted and repetitive behavior, which is a core symptom of autism spectrum disorder (ASD). Current quantification of SMM severity is extremely limited, with studies relying on coarse and subjective caregiver reports or laborious manual annotation of short video recordings. Objective: To assess the utility of a new open-source AI algorithm that can analyze extensive video recordings of children and automatically identify segments with heterogeneous SMMs, thereby enabling their direct and objective quantification. Design, Setting, and Participants: This retrospective cohort study included 241 children (aged 1.4 to 8.0 years) with ASD. Video recordings of 319 behavioral assessments carried out at the Azrieli National Centre for Autism and Neurodevelopment Research in Israel between 2017 and 2021 were extracted. Behavioral assessments included cognitive, language, and autism diagnostic observation schedule, 2nd edition (ADOS-2) assessments. Data were analyzed from October 2020 to May 2024. Exposures: Each assessment was recorded with 2 to 4 cameras, yielding 580 hours of video footage. Within these extensive video recordings, manual annotators identified 7352 video segments containing heterogeneous SMMs performed by different children (21.14 hours of video). Main Outcomes and Measures: A pose estimation algorithm was used to extract skeletal representations of all individuals in each video frame, and an object detection algorithm was trained to identify the child in each video. The skeletal representation of the child was then used to train an SMM recognition algorithm based on a 3-dimensional convolutional neural network. Data from 220 children were used for training, and data from the remaining 21 children were used for testing.
Results: Among 319 behavioral assessment recordings from 241 children (172 [78%] male; mean [SD] age, 3.97 [1.30] years), the algorithm accurately detected 92.53% (95% CI, 81.09%-95.10%) of manually annotated SMMs in our test data with 66.82% (95% CI, 55.28%-72.05%) precision. The overall number and duration of algorithm-identified SMMs per child were highly correlated with the manually annotated number and duration of SMMs (r = 0.8; 95% CI, 0.67-0.93; P < .001; and r = 0.88; 95% CI, 0.74-0.96; P < .001, respectively). Conclusions and Relevance: This study suggests that the algorithm can identify a highly diverse range of SMMs and quantify them with high accuracy, enabling objective and direct estimation of SMM severity in individual children with ASD.
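The severity measures reported above (number and total duration of SMM episodes per child) can be derived from per-frame classifier output by merging consecutive above-threshold frames into segments. The sketch below is a hypothetical illustration of that post-processing step, not the study's published code; the threshold, frame rate, and minimum-length values are illustrative assumptions.

```python
# Hypothetical sketch: per-frame SMM scores (e.g. from a 3D CNN over
# skeletal keypoints) are thresholded and merged into contiguous
# segments, whose count and total duration quantify SMM severity.
# All parameter values here are illustrative, not from the study.

def smm_segments(frame_scores, fps=30, threshold=0.5, min_frames=15):
    """Merge consecutive above-threshold frames into (start_s, end_s) segments."""
    segments, start = [], None
    for i, score in enumerate(frame_scores):
        if score >= threshold and start is None:
            start = i  # a candidate segment begins
        elif score < threshold and start is not None:
            if i - start >= min_frames:  # keep only sufficiently long runs
                segments.append((start / fps, i / fps))
            start = None
    if start is not None and len(frame_scores) - start >= min_frames:
        segments.append((start / fps, len(frame_scores) / fps))
    return segments

def severity(segments):
    """Number of SMM episodes and their total duration in seconds."""
    return len(segments), sum(end - start for start, end in segments)
```

For example, 60 high-scoring frames, 30 low, then 45 high (at 30 fps) yield two episodes totaling 3.5 seconds of SMM.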


Subject(s)
Algorithms , Autism Spectrum Disorder , Video Recording , Humans , Autism Spectrum Disorder/diagnosis , Autism Spectrum Disorder/physiopathology , Child , Child, Preschool , Male , Female , Retrospective Studies , Infant , Stereotypic Movement Disorder/diagnosis , Stereotyped Behavior , Israel
20.
BMC Bioinformatics ; 22(Suppl 5): 638, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39266977

ABSTRACT

BACKGROUND: Mild cognitive impairment (MCI) is the transition stage between the cognitive decline expected in normal aging and more severe cognitive decline such as dementia. The early diagnosis of MCI plays an important role in human healthcare. Current methods of MCI detection include cognitive tests to screen for executive function impairments, possibly followed by neuroimaging tests. However, these methods are expensive and time-consuming. Several studies have demonstrated that MCI and dementia can be detected with machine learning technologies from data of different modalities. This study proposes a multi-stream convolutional neural network (MCNN) model to predict MCI from face videos. RESULTS: The effective dataset comprises 48 facial videos from 45 participants, including 35 videos from participants with normal cognition and 13 videos from participants with MCI. The videos are divided into several segments. The MCNN then captures the latent facial spatial features and facial dynamic features of each segment and classifies the segment as MCI or normal. Finally, an aggregation stage produces the final detection result for the input video. We evaluate 27 MCNN model combinations spanning three ResNet architectures, three optimizers, and three activation functions. The experimental results show that the ResNet-50 backbone with the Swish activation function and the Ranger optimizer produces the best results, with an F1-score of 89% at the segment level, whereas the ResNet-18 backbone with Swish and Ranger achieves an F1-score of 100% at the participant level. CONCLUSIONS: This study presents an efficient new method for predicting MCI from facial videos. It shows that MCI can be detected from facial videos and that facial data can be used as a biomarker for MCI. This approach is very promising for developing accurate models for screening MCI through facial data.
It demonstrates that automated, non-invasive, and inexpensive MCI screening methods are feasible and do not require highly subjective paper-and-pencil questionnaires. The evaluation of 27 model combinations also found that ResNet-50 with Swish is more stable across different optimizers. These results provide directions for hyperparameter tuning to further improve MCI prediction.
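The two-level evaluation above (segment-level classification followed by a participant-level decision) implies an aggregation rule over per-segment predictions. The abstract does not specify the rule, so the sketch below assumes simple majority voting over segment probabilities; the function name and threshold are illustrative.

```python
# Hedged sketch of the aggregation stage: segment-level MCI probabilities
# (e.g. from the MCNN) are combined into one participant-level label.
# Majority voting is an assumption; the study's exact rule is not stated.

def aggregate_segments(segment_probs, threshold=0.5):
    """Classify each video segment, then majority-vote to a video-level label."""
    votes = [p >= threshold for p in segment_probs]
    # Strict majority of segments flagged as MCI decides the video label.
    return "MCI" if sum(votes) * 2 > len(votes) else "normal"
```

For instance, a video whose segments score [0.8, 0.7, 0.3] would be labeled "MCI", since two of three segments exceed the threshold.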


Subject(s)
Cognitive Dysfunction , Neural Networks, Computer , Cognitive Dysfunction/diagnosis , Humans , Aged , Machine Learning , Male , Female , Face/diagnostic imaging , Video Recording/methods