Results 1 - 20 of 379
1.
Int J Behav Nutr Phys Act ; 21(1): 99, 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39256837

ABSTRACT

BACKGROUND: Accurately measuring energy expenditure during physical activity outside of the laboratory is challenging, especially on a large scale. Thigh-worn accelerometers have gained popularity due to their ability to accurately detect physical activity types. The use of machine learning techniques for activity classification and energy expenditure prediction may improve accuracy over current methods. Here, we developed a novel composite energy expenditure estimation model by combining an activity classification model with a stride-specific energy expenditure model for walking, running, and cycling. METHODS: We first trained a supervised deep learning activity classification model using pooled data from available adult accelerometer datasets. The composite energy expenditure model was then developed and validated using additional data based on a sample of 69 healthy adult participants (49% female; age = 25.2 ± 5.8 years) who completed a standardised activity protocol with indirect calorimetry as the reference measure. RESULTS: The activity classification model showed an overall accuracy of 99.7% across all five activity types during validation. The composite model for estimating energy expenditure achieved a mean absolute percentage error of 10.9%. For running, walking, and cycling, the composite model achieved a mean absolute percentage error of 6.6%, 7.9%, and 16.1%, respectively. CONCLUSIONS: The integration of thigh-worn accelerometers with machine learning models provides a highly accurate method for classifying physical activity types and estimating energy expenditure. Our novel composite model approach improves the accuracy of energy expenditure measurements and supports better monitoring and assessment methods in non-laboratory settings.
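
As a rough illustration of the composite approach described above (not the authors' implementation), the sketch below routes each accelerometer window through a hypothetical activity classifier and then applies an activity-specific energy-expenditure estimator, scoring the result with the mean absolute percentage error reported in the abstract; classify_activity and ee_models are placeholder names.

    import numpy as np

    def mean_absolute_percentage_error(y_true, y_pred):
        """MAPE in percent (the abstract reports 10.9% overall)."""
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

    def composite_energy_expenditure(windows, classify_activity, ee_models):
        """Hypothetical composite model: classify each accelerometer window,
        then delegate to an activity-specific energy-expenditure estimator."""
        estimates = []
        for window in windows:
            activity = classify_activity(window)            # e.g. 'walking', 'running', 'cycling'
            estimates.append(ee_models[activity](window))   # activity-specific estimator
        return np.array(estimates)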


Subject(s)
Accelerometry; Bicycling; Energy Metabolism; Running; Thigh; Walking; Humans; Energy Metabolism/physiology; Female; Accelerometry/methods; Adult; Male; Walking/physiology; Running/physiology; Young Adult; Bicycling/physiology; Indirect Calorimetry/methods; Exercise/physiology; Machine Learning
2.
Technol Health Care ; 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39269866

ABSTRACT

BACKGROUND: A daily activity routine is vital for overall health and well-being, supporting physical and mental fitness. Consistent physical activity is linked to a multitude of benefits for the body, mind, and emotions, playing a key role in promoting a healthy lifestyle. The use of wearable devices has become essential in the realm of health and fitness, facilitating the monitoring of daily activities. While convolutional neural networks (CNN) have proven effective, challenges remain in quickly adapting to a variety of activities. OBJECTIVE: This study aimed to develop a model for precise recognition of human activities, advancing health monitoring by integrating transformer models with multi-head attention for human activity recognition using wearable devices. METHODS: The Human Activity Recognition (HAR) algorithm uses deep learning to classify human activities from spectrogram data. It uses a pretrained convolutional neural network (CNN) with a MobileNetV2 model to extract features, a dense residual transformer network (DRTN), and a multi-head multi-level attention architecture (MH-MLA) to capture time-related patterns. The model then blends information from both layers through an adaptive attention mechanism and uses a softmax function to provide classification probabilities for various human activities. RESULTS: The integrated approach, which combines a pretrained CNN with transformer models into a thorough and effective system for recognizing human activities from spectrogram data, outperformed comparison methods across datasets, achieving accuracies of 92.81% on HARTH, 97.98% on KU-HAR, and 95.32% on HuGaDB. This suggests that the integration of diverse methodologies yields good results in capturing nuanced human activities. The comparison analysis showed that the integrated system consistently performs better on dynamic human activity recognition datasets. CONCLUSION: In conclusion, maintaining a routine of daily activities is crucial for overall health and well-being. Regular physical activity contributes substantially to a healthy lifestyle, benefiting both the body and the mind. The integration of wearable devices has simplified the monitoring of daily routines. This research introduces an innovative approach to human activity recognition, combining a CNN with a dense residual transformer network (DRTN) and multi-head multi-level attention (MH-MLA) within the transformer architecture to enhance its capability.
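
The sketch below shows the general pattern of a pretrained CNN backbone followed by multi-head attention and a softmax classifier over spectrogram inputs; it is a minimal illustration, not the authors' DRTN/MH-MLA architecture, and the number of classes, heads, and feature dimensions are assumptions.

    import torch
    import torch.nn as nn
    from torchvision.models import mobilenet_v2

    class SpectrogramHAR(nn.Module):
        def __init__(self, num_classes=12, embed_dim=1280, num_heads=8):
            super().__init__()
            self.backbone = mobilenet_v2(weights="DEFAULT").features   # pretrained feature extractor
            self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
            self.head = nn.Linear(embed_dim, num_classes)

        def forward(self, x):                                 # x: (batch, 3, H, W) spectrogram images
            feats = self.backbone(x)                          # (batch, 1280, h, w)
            tokens = feats.flatten(2).transpose(1, 2)         # (batch, h*w, 1280) token sequence
            attended, _ = self.attn(tokens, tokens, tokens)   # multi-head self-attention
            return self.head(attended.mean(dim=1)).softmax(dim=-1)   # class probabilities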

3.
J Phys Act Health ; 21(10): 1092-1099, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39159934

ABSTRACT

BACKGROUND: The ActiPASS software was developed from the open-source Acti4 activity classification algorithm for thigh-worn accelerometry. However, the original algorithm has not been validated in children or compared with a child-specific set of algorithm thresholds. This study aims to evaluate the accuracy of ActiPASS in classifying activity types in children using 2 published sets of Acti4 thresholds. METHODS: Laboratory and free-living data from 2 previous studies were used. The laboratory condition included 41 school-aged children (11.0 [4.8] y; 46.5% male), and the free-living condition included 15 children (10.0 [2.6] y; 66.6% male). Participants wore a single accelerometer on the dominant thigh, and annotated video recordings were used as a reference. Postures and activity types were classified with ActiPASS using the original adult thresholds and a child-specific set of thresholds. RESULTS: Using the original adult thresholds, the mean balanced accuracy (95% CI) for the laboratory condition ranged from 0.62 (0.56-0.67) for lying to 0.97 (0.94-0.99) for running. For the free-living condition, accuracy ranged from 0.61 (0.48-0.75) for lying to 0.96 (0.92-0.99) for cycling. Mean balanced accuracy for overall sedentary behavior (sitting and lying) was ≥0.97 (0.95-0.99) across all thresholds and conditions. No meaningful differences were found between the 2 sets of thresholds, except for superior balanced accuracy of the adult thresholds for walking under laboratory conditions. CONCLUSIONS: The results indicate that ActiPASS can accurately classify different basic types of physical activity and sedentary behavior in children using thigh-worn accelerometer data.
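
For reference, per-class balanced accuracy of the kind reported above can be computed against the annotated video labels in a one-vs-rest fashion; the sketch below uses scikit-learn, assumes string activity labels, and is not part of ActiPASS itself.

    from sklearn.metrics import balanced_accuracy_score

    def per_class_balanced_accuracy(reference, predicted, classes):
        """One-vs-rest balanced accuracy per activity type (e.g. lying, sitting, walking)."""
        scores = {}
        for c in classes:
            ref_bin = [label == c for label in reference]
            pred_bin = [label == c for label in predicted]
            scores[c] = balanced_accuracy_score(ref_bin, pred_bin)
        return scores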

4.
Front Bioeng Biotechnol ; 12: 1398291, 2024.
Article in English | MEDLINE | ID: mdl-39175622

ABSTRACT

Introduction: Falls are a major cause of accidents that can lead to serious injuries, especially among geriatric populations worldwide. Ensuring constant supervision in hospitals or smart environments while maintaining comfort and privacy is practically impossible. Therefore, fall detection has become a significant area of research, particularly with the use of multimodal sensors. The lack of efficient techniques for automatic fall detection hampers the creation of effective preventative tools capable of identifying falls during physical exercise in long-term care environments. The primary goal of this article is to examine the benefits of using multimodal sensors to enhance the precision of fall detection systems. Methods: The proposed method combines time-frequency features from inertial sensors with skeleton-based modeling from depth sensors to extract features. These multimodal sensors are then integrated using a fusion technique. Optimization and a modified K-Ary classifier are subsequently applied to the resultant fused data. Results: The suggested model achieved an accuracy of 97.97% on the UP-Fall Detection dataset and 97.89% on the UR-Fall Detection dataset. Discussion: This indicates that the proposed model outperforms state-of-the-art classification results. Additionally, the proposed model can be utilized as an IoT-based solution, effectively promoting the development of tools to prevent fall-related injuries.

5.
Sensors (Basel) ; 24(15), 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39124043

ABSTRACT

The behavior of pedestrians in a non-constrained environment is difficult to predict. In wearable robotics, this poses a challenge, since devices like lower-limb exoskeletons and active orthoses need to support different walking activities, including level walking and climbing stairs. While a fixed movement trajectory can be easily supported, switches between these activities are difficult to predict. Moreover, the demand for these devices is expected to rise in the years ahead. In this work, we propose a cloud software system for use in wearable robotics, based on geographical mapping techniques and Human Activity Recognition (HAR). The system aims to provide context on the surroundings of pedestrians by supplying hindsight information. The system was partially implemented and tested. The results indicate a viable concept with great extensibility prospects.


Subject(s)
Cloud Computing; Motion; Robotics; Wearable Electronic Devices; Humans; Walking; Human Activities; Algorithms
6.
Sensors (Basel) ; 24(16), 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39205129

ABSTRACT

Human activity recognition (HAR) is a crucial task in various applications, including healthcare, fitness, and the military. Deep learning models have revolutionized HAR; however, their computational complexity, particularly in models involving BiLSTMs, poses significant challenges for deployment on resource-constrained devices like smartphones. While BiLSTMs effectively capture long-term dependencies by processing inputs bidirectionally, their high parameter count and computational demands hinder practical applications in real-time HAR. This study investigates the approximation of the computationally intensive BiLSTM component in a HAR model by using a combination of alternative model components and data flipping augmentation. The proposed modifications to an existing hybrid model architecture replace the BiLSTM with standard and residual LSTMs, along with convolutional networks, supplemented by data flipping augmentation to replicate the context awareness typically provided by BiLSTM networks. The results demonstrate that the residual LSTM (ResLSTM) model achieves superior performance while maintaining a lower computational complexity compared to the traditional BiLSTM model. Specifically, on the UCI-HAR dataset, the ResLSTM model attains an accuracy of 96.34% with 576,702 parameters, outperforming the BiLSTM model's accuracy of 95.22% with 849,534 parameters. On the WISDM dataset, the ResLSTM achieves an accuracy of 97.20% with 192,238 parameters, compared to the BiLSTM's 97.23% accuracy with 283,182 parameters, demonstrating a more efficient architecture with minimal performance trade-off. For the KU-HAR dataset, the ResLSTM model achieves an accuracy of 97.05% with 386,038 parameters, showing comparable performance to the BiLSTM model's 98.63% accuracy with 569,462 parameters, but with significantly fewer parameters.
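
As a rough illustration of the trade-off discussed above (not the published architectures), the sketch below compares the parameter counts of a bidirectional and a unidirectional LSTM layer in PyTorch and shows the time-axis flipping augmentation that can stand in for bidirectional context; the channel and hidden sizes are assumptions.

    import torch
    import torch.nn as nn

    def count_parameters(module):
        return sum(p.numel() for p in module.parameters() if p.requires_grad)

    channels, hidden = 9, 128                       # e.g. 9 inertial channels (assumed)
    bilstm = nn.LSTM(channels, hidden, batch_first=True, bidirectional=True)
    lstm = nn.LSTM(channels, hidden, batch_first=True)
    print(count_parameters(bilstm), count_parameters(lstm))   # BiLSTM has roughly twice as many

    def flip_augment(batch):
        """Append time-reversed copies of each window (batch, time, channels)."""
        return torch.cat([batch, torch.flip(batch, dims=[1])], dim=0)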


Subject(s)
Deep Learning; Human Activities; Humans; Neural Networks, Computer; Algorithms; Smartphone
7.
Sensors (Basel) ; 24(16), 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39205143

ABSTRACT

This study introduces an innovative approach by incorporating statistical offset features, range profiles, time-frequency analyses, and azimuth-range-time characteristics to effectively identify various human daily activities. Our technique utilizes nine feature vectors consisting of six statistical offset features and three principal component analysis network (PCANet) fusion attributes. These statistical offset features are derived from combined elevation and azimuth data, considering their spatial angle relationships. The fusion attributes are generated through concurrent 1D networks using CNN-BiLSTM. The process begins with the temporal fusion of 3D range-azimuth-time data, followed by PCANet integration. Subsequently, a conventional classification model is employed to categorize a range of actions. Our methodology was tested with 21,000 samples across fourteen categories of human daily activities, demonstrating the effectiveness of our proposed solution. The experimental outcomes highlight the superior robustness of our method, particularly when using the Margenau-Hill Spectrogram for time-frequency analysis. When employing a random forest classifier, our approach outperformed other classifiers in terms of classification efficacy, achieving an average sensitivity, precision, F1, specificity, and accuracy of 98.25%, 98.25%, 98.25%, 99.87%, and 99.75%, respectively.


Subject(s)
Algorithms; Principal Component Analysis; Humans; Human Activities/classification; Radar; Neural Networks, Computer; Activities of Daily Living
8.
Sensors (Basel) ; 24(14), 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39065939

ABSTRACT

The characterization of human behavior in real-world contexts is critical for developing a comprehensive model of human health. Recent technological advancements have enabled wearables and sensors to passively and unobtrusively record and presumably quantify human behavior. Understanding human activities in unobtrusive and passive ways is indispensable for understanding the relationship between behavioral determinants of health and diseases. Adult individuals (N = 60) emulated the behaviors of smoking, exercising, eating, and medication (pill) taking in a laboratory setting while equipped with smartwatches that captured accelerometer data. The collected data underwent expert annotation and was used to train a deep neural network integrating convolutional and long short-term memory architectures to effectively segment time series into discrete activities. An average macro-F1 score of at least 85.1 resulted from a rigorous leave-one-subject-out cross-validation procedure conducted across participants. The score indicates the method's high performance and potential for real-world applications, such as identifying health behaviors and informing strategies to influence health. Collectively, we demonstrated the potential of AI and its contributing role in healthcare during the early phases of diagnosis, prognosis, and/or intervention. From predictive analytics to personalized treatment plans, AI has the potential to assist healthcare professionals in making informed decisions, leading to more efficient and tailored patient care.
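
A minimal sketch of the leave-one-subject-out evaluation described above, using scikit-learn's LeaveOneGroupOut with a macro-averaged F1 score; the feature matrix, labels, and classifier are placeholders rather than the study's convolutional plus LSTM network.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import LeaveOneGroupOut

    def loso_macro_f1(X, y, subject_ids):
        """Train on all subjects but one, test on the held-out subject, and average macro-F1."""
        scores = []
        for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subject_ids):
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(X[train_idx], y[train_idx])
            scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))
        return np.mean(scores)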


Subject(s)
Human Activities; Neural Networks, Computer; Wearable Electronic Devices; Humans; Adult; Male; Female; Accelerometry/methods; Exercise/physiology
9.
Data Brief ; 55: 110673, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39049967

ABSTRACT

Human Activity Recognition (HAR) has emerged as a critical research area due to its extensive applications in various real-world domains. Numerous CSI-based datasets have been established to support the development and evaluation of advanced HAR algorithms. However, existing CSI-based HAR datasets are frequently limited by a dearth of complexity and diversity in the activities represented, hindering the design of robust HAR models. These limitations typically manifest as a narrow focus on a limited range of activities or the exclusion of factors influencing real-world CSI measurements. Consequently, the scarcity of diverse training data can impede the development of efficient HAR systems. To address the limitations of existing datasets, this paper introduces a novel dataset that captures spatial diversity through multiple transceiver orientations over a high-dimensional space encompassing a large number of subcarriers. The dataset incorporates a wider range of real-world factors, including an extensive activity range, a spectrum of human movements (encompassing both micro- and macro-movements), variations in body composition, and diverse environmental conditions (noise and interference). The experiment was performed in a controlled laboratory environment with dimensions of 5 m (width) × 8 m (length) × 3 m (height) to capture CSI measurements for various human activities. Four ESP32-S3-DevKitC-1 devices, configured as transceiver pairs with unique Media Access Control (MAC) addresses, collect CSI data according to the Wi-Fi IEEE 802.11n standard. Mounted on tripods at a height of 1.5 m, the transmitter devices (powered by external power banks) positioned at north and east send multiple Wi-Fi beacons to their respective receivers (connected to laptops via USB for data collection) located at south and west. To capture multi-perspective CSI data, all six participants sequentially performed designated activities while standing in the centre of the tripod arrangement for 5 s per sample. The system collected approximately 300-450 packets per sample for approximately 1200 samples per activity, capturing CSI information across the 166 subcarriers employed in the Wi-Fi IEEE 802.11n standard. By leveraging the richness of this dataset, HAR researchers can develop more robust and generalizable CSI-based HAR models. Compared to traditional HAR approaches, these CSI-based models hold the promise of significantly enhanced accuracy and robustness when deployed in real-world scenarios. This stems from their ability to capture the nuanced dynamics of human movement through the analysis of wireless channel characteristics across different spatial variations (utilizing a two-diagonal ESP32 transceiver configuration) with a higher degree of dimensionality (166 subcarriers).
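
Because each sample in the dataset contains a variable number of packets (roughly 300-450) over 166 subcarriers, a common preprocessing step is to pad or truncate every sample to a fixed packet count before training; the sketch below shows one way to do this and is not part of the dataset's own tooling.

    import numpy as np

    def pack_csi_samples(samples, n_packets=300, n_subcarriers=166):
        """Pad/truncate each sample (packets x subcarriers of CSI amplitudes) to a fixed shape."""
        out = np.zeros((len(samples), n_packets, n_subcarriers), dtype=np.float32)
        for i, csi in enumerate(samples):
            csi = np.asarray(csi, dtype=np.float32)[:n_packets, :n_subcarriers]
            out[i, :csi.shape[0], :csi.shape[1]] = csi
        return out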

10.
Sensors (Basel) ; 24(14), 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39066043

ABSTRACT

Human activity recognition (HAR) is pivotal in advancing applications ranging from healthcare monitoring to interactive gaming. Traditional HAR systems, primarily relying on single data sources, face limitations in capturing the full spectrum of human activities. This study introduces a comprehensive approach to HAR by integrating two critical modalities: RGB imaging and advanced pose estimation features. Our methodology leverages the strengths of each modality to overcome the drawbacks of unimodal systems, providing a richer and more accurate representation of activities. We propose a two-stream network that processes skeletal and RGB data in parallel, enhanced by pose estimation techniques for refined feature extraction. The integration of these modalities is facilitated through advanced fusion algorithms, significantly improving recognition accuracy. Extensive experiments conducted on the UTD multimodal human action dataset (UTD MHAD) demonstrate that the proposed approach exceeds the performance of existing state-of-the-art algorithms, yielding improved outcomes. This study not only sets a new benchmark for HAR systems but also highlights the importance of feature engineering in capturing the complexity of human movements and the integration of optimal features. Our findings pave the way for more sophisticated, reliable, and applicable HAR systems in real-world scenarios.
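
A minimal sketch of a two-stream late-fusion pattern of the kind described above, with one branch for RGB-derived features and one for skeleton/pose features fused by concatenation; the feature dimensions and class count are assumptions, and this is not the paper's architecture.

    import torch
    import torch.nn as nn

    class TwoStreamFusion(nn.Module):
        def __init__(self, rgb_dim=512, pose_dim=128, hidden=256, n_classes=27):
            super().__init__()
            self.rgb_branch = nn.Sequential(nn.Linear(rgb_dim, hidden), nn.ReLU())
            self.pose_branch = nn.Sequential(nn.Linear(pose_dim, hidden), nn.ReLU())
            self.classifier = nn.Linear(2 * hidden, n_classes)   # late fusion by concatenation

        def forward(self, rgb_feats, pose_feats):
            fused = torch.cat([self.rgb_branch(rgb_feats), self.pose_branch(pose_feats)], dim=-1)
            return self.classifier(fused)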


Subject(s)
Algorithms; Human Activities; Humans; Image Processing, Computer-Assisted/methods; Movement/physiology; Posture/physiology; Pattern Recognition, Automated/methods
11.
Sensors (Basel) ; 24(14), 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39066103

ABSTRACT

As Canada's population of older adults rises, the need for aging-in-place solutions is growing due to the declining quality of long-term-care homes and long wait times. While the current standards include questionnaire-based assessments for monitoring activities of daily living (ADLs), there is an urgent need for advanced indoor localization technologies that ensure privacy. This study explores the use of Ultra-Wideband (UWB) technology for activity recognition in a mock condo in the Glenrose Rehabilitation Hospital. UWB systems with built-in Inertial Measurement Unit (IMU) sensors were tested, using anchors set up across the condo and a tag worn by patients. We tested various UWB setups, changed the number of anchors, and varied the tag placement (on the wrist or chest). Wrist-worn tags consistently outperformed chest-worn tags, and the nine-anchor configuration yielded the highest accuracy. Machine learning models were developed to classify activities based on UWB and IMU data. Models that included positional data significantly outperformed those that did not. The Random Forest model with a 4 s data window achieved an accuracy of 94%, compared to 79.2% when positional data were excluded. These findings demonstrate that incorporating positional data with IMU sensors is a promising method for effective remote patient monitoring.
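
As a rough sketch of the reported setup (a Random Forest over 4 s windows combining UWB position with IMU channels), not the study's actual pipeline; the sampling rate and the simple mean/standard-deviation features are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(position_xyz, imu, fs=20, window_s=4):
        """Mean and std over 4 s windows of UWB position (x, y, z) and IMU channels."""
        n = int(fs * window_s)
        feats = []
        for start in range(0, len(imu) - n + 1, n):
            segment = np.hstack([position_xyz[start:start + n], imu[start:start + n]])
            feats.append(np.concatenate([segment.mean(axis=0), segment.std(axis=0)]))
        return np.array(feats)

    # clf = RandomForestClassifier(n_estimators=300).fit(window_features(pos, imu), window_labels)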


Subject(s)
Activities of Daily Living; Machine Learning; Humans; Monitoring, Ambulatory/methods; Monitoring, Ambulatory/instrumentation; Wearable Electronic Devices; Accelerometry/instrumentation; Accelerometry/methods; Monitoring, Physiologic/methods; Monitoring, Physiologic/instrumentation
12.
Sci Rep ; 14(1): 15310, 2024 07 03.
Article in English | MEDLINE | ID: mdl-38961136

ABSTRACT

Human activity recognition has a wide range of applications in various fields, such as video surveillance, virtual reality, and human-computer intelligent interaction. It has emerged as a significant research area in computer vision. Graph convolutional networks (GCNs) have recently been widely used in these fields and have achieved strong performance. However, challenges remain, including the over-smoothing problem caused by stacked graph convolutions and insufficient semantic correlation for capturing large movements between time sequences. The Vision Transformer (ViT) is used in many 2D and 3D imaging fields and has shown surprising results. In our work, we propose a novel human activity recognition method based on ViT (HAR-ViT). We integrate enhanced AGCL (eAGCL) from 2s-AGCN into ViT to enable it to process spatio-temporal data (3D skeletons) and make full use of spatial features. The position encoder module orders the non-sequenced information, while the transformer encoder efficiently compresses sequence data features to enhance calculation speed. Human activity recognition is accomplished through a multi-layer perceptron (MLP) classifier. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on three extensively used datasets: NTU RGB+D 60, NTU RGB+D 120, and Kinetics-Skeleton 400.


Subject(s)
Human Activities; Humans; Neural Networks, Computer; Algorithms; Pattern Recognition, Automated/methods; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods
13.
Sensors (Basel) ; 24(13), 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-39001122

ABSTRACT

Human Activity Recognition (HAR), alongside Ambient Assisted Living (AAL), is an integral component of smart homes, sports, surveillance, and investigation activities. To recognize daily activities, researchers are focusing on lightweight, cost-effective, wearable sensor-based technologies, as traditional vision-based technologies do not preserve the privacy of the elderly, a fundamental right of every human. However, it is challenging to extract potential features from 1D multi-sensor data. Thus, this research focuses on extracting distinguishable patterns and deep features from spectral images obtained by time-frequency-domain analysis of 1D multi-sensor data. Wearable sensor data, particularly accelerometer and gyroscope data, act as input signals for different daily activities and provide potential information through time-frequency analysis. This time-series information is mapped into spectral images as scalograms, derived from the continuous wavelet transform. The deep activity features are extracted from the activity images using deep learning models such as CNN, MobileNetV3, ResNet, and GoogleNet, and subsequently classified using a conventional classifier. To validate the proposed model, the SisFall and PAMAP2 benchmark datasets are used. Based on the experimental results, the proposed model shows optimal performance for activity recognition, obtaining an accuracy of 98.4% for SisFall and 98.1% for PAMAP2 using Morlet as the mother wavelet with ResNet-101 and a softmax classifier, and outperforms state-of-the-art algorithms.
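
A minimal sketch of turning one 1D sensor channel into a scalogram with a Morlet continuous wavelet transform, using PyWavelets; the scale range and sampling rate are assumptions, and the resulting magnitude image would then be fed to a CNN such as ResNet-101 as described above.

    import numpy as np
    import pywt

    def scalogram(signal_1d, fs=100.0, scales=np.arange(1, 65)):
        """Continuous wavelet transform with the Morlet mother wavelet ('morl')."""
        coeffs, freqs = pywt.cwt(signal_1d, scales, "morl", sampling_period=1.0 / fs)
        return np.abs(coeffs)        # (scales, time) magnitude image suitable for a CNN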


Subject(s)
Human Activities; Wavelet Analysis; Humans; Human Activities/classification; Algorithms; Deep Learning; Wearable Electronic Devices; Activities of Daily Living; Neural Networks, Computer; Image Processing, Computer-Assisted/methods
14.
Data Brief ; 55: 110731, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39081492

ABSTRACT

Given the popularity of wrist-worn devices, particularly smartwatches, the identification of manual movement patterns has become of utmost interest within the research field of Human Activity Recognition (HAR) systems. In this context, by leveraging the numerous sensors natively embedded in smartwatches, the HAR functionalities that can be implemented in a watch via software and in a very cost-efficient way cover a wide variety of applications, ranging from fitness trackers to gesture detectors aimed at disabled individuals (e.g., for sending alarms), promoting behavioral activation or healthy lifestyle habits. In this regard, for the development of artificial intelligence algorithms capable of effectively discriminating these activities, it is of great importance to have repositories of movements that allow the scientific community to train, evaluate, and benchmark new proposals of movement detectors. The UMAHand dataset offers a collection of files containing the signals captured by a Shimmer 3 sensor node, which includes an accelerometer, a gyroscope, a magnetometer and a barometer, during the execution of different typical hand movements. For that purpose, the measurements from these four sensors, gathered at a sampling rate of 100 Hz, were taken from a group of 25 volunteers (16 females and 9 males), aged between 18 and 56, during the performance of 29 daily life activities involving hand mobility. Participants wore the sensor node on their dominant hand throughout the experiments.

15.
Data Brief ; 55: 110621, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39006348

ABSTRACT

The Timed Up and Go (TUG) test is one of the most popular clinical tools aimed at the assessment of functional mobility and fall risk in older adults. The automation of the analysis of TUG movements is of great medical interest, not only to speed up the test but also to maximize the information inferred from the subjects under study. In this context, this article describes a dataset collected from a cohort of 69 experimental subjects (including 30 adults over 60 years) during the execution of several repetitions of the TUG test. In particular, the dataset includes the measurements gathered with four wearable devices embedding four sensors (accelerometer, gyroscope, magnetometer, and barometer) located on four body locations (waist, wrist, ankle, and chest). As a particularity, the dataset also includes the same measurements recorded when the young subjects repeat the test while wearing a commercial geriatric simulator, consisting of a set of weighted vests and other elements intended to replicate the limitations caused by aging. Thus, the generated dataset also enables investigation into the potential of such tools to emulate the actual dynamics of older individuals.

16.
Int J Behav Nutr Phys Act ; 21(1): 77, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39020353

ABSTRACT

BACKGROUND: The more accurately we can assess human physical behaviour in free-living conditions, the better we can understand its relationship with health and wellbeing. Thigh-worn accelerometry can be used to identify basic activity types as well as different postures with high accuracy. User-friendly software without the need for specialized programming may support the adoption of this method. This study aims to evaluate the classification accuracy of two novel no-code classification methods, namely SENS motion and ActiPASS. METHODS: A sample of 38 healthy adults (30.8 ± 9.6 years; 53% female) wore the SENS motion accelerometer (12.5 Hz; ±4 g) on their thigh during various physical activities. Participants completed standardized activities with varying intensities in the laboratory. Activities included walking, running, cycling, sitting, standing, and lying down. Subsequently, participants performed unrestricted free-living activities outside of the laboratory while being video-recorded with a chest-mounted camera. Videos were annotated using a predefined labelling scheme and annotations served as a reference for the free-living condition. Classification output from the SENS motion software and ActiPASS software was compared to reference labels. RESULTS: A total of 63.6 h of activity data were analysed. We observed a high level of agreement between the two classification algorithms and their respective references in both conditions. In the free-living condition, Cohen's kappa coefficients were 0.86 for SENS and 0.92 for ActiPASS. The mean balanced accuracy ranged from 0.81 (cycling) to 0.99 (running) for SENS and from 0.92 (walking) to 0.99 (sedentary) for ActiPASS across all activity types. CONCLUSIONS: The study shows that two available no-code classification methods can be used to accurately identify basic physical activity types and postures. Our results highlight the accuracy of both methods based on relatively low sampling frequency data. The classification methods showed differences in performance, with lower sensitivity observed in free-living cycling (SENS) and slow treadmill walking (ActiPASS). Both methods use different sets of activity classes with varying definitions, which may explain the observed differences. Our results support the use of the SENS motion system and both no-code classification methods.
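
For context, the agreement statistic reported above can be reproduced from paired, epoch-by-epoch label sequences with scikit-learn; the sketch below assumes the classifier output has already been aligned with the annotated video reference.

    from sklearn.metrics import cohen_kappa_score

    def kappa_vs_reference(reference_labels, predicted_labels):
        """Cohen's kappa across all epochs (e.g. 0.86 for SENS, 0.92 for ActiPASS in free living)."""
        return cohen_kappa_score(reference_labels, predicted_labels)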


Subject(s)
Accelerometry; Exercise; Thigh; Walking; Humans; Female; Male; Adult; Accelerometry/methods; Exercise/physiology; Walking/physiology; Young Adult; Algorithms; Software; Running/physiology; Bicycling/physiology; Posture
17.
Heliyon ; 10(13): e33295, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39027497

ABSTRACT

Study objectives: To develop a non-invasive and practical wearable method for long-term tracking of infants' sleep. Methods: An infant wearable, NAPping PAnts (NAPPA), was constructed by combining a diaper cover and a movement sensor (triaxial accelerometer and gyroscope), allowing either real-time data streaming to mobile devices or offline feature computation stored in the sensor memory. A sleep state classifier (wake, N1/REM, N2/N3) was trained and tested for NAPPA recordings (N = 16649 epochs of 30 s), using hypnograms from co-registered polysomnography (PSG) as a training target in 33 infants (age 2 weeks to 18 months; Mean = 4). User experience was assessed from an additional group of 16 parents. Results: Overnight NAPPA recordings were successfully performed in all infants. The sleep state classifier showed good overall accuracy (78 %; Range 74-83 %) when using a combination of five features related to movement and respiration. Sleep depth trends were generated from the classifier outputs to visualise sleep state fluctuations, which closely aligned with PSG-derived hypnograms in all infants. Consistently positive parental feedback affirmed the effectiveness of the NAPPA-design. Conclusions: NAPPA offers a practical and feasible method for out-of-hospital assessment of infants' sleep behaviour. It can directly support large-scale quantitative studies and development of new paradigms in scientific research and infant healthcare. Moreover, NAPPA provides accurate and informative computational measures for body positions, respiration rates, and activity levels, each with their respective clinical and behavioural value.

18.
Sensors (Basel) ; 24(12), 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38931675

ABSTRACT

Human Activity Recognition (HAR) plays an important role in the automation of various tasks related to activity tracking in such areas as healthcare and eldercare (telerehabilitation, telemonitoring), security, ergonomics, entertainment (fitness, sports promotion, human-computer interaction, video games), and intelligent environments. This paper tackles the problem of real-time recognition and repetition counting of 12 types of exercises performed during athletic workouts. Our approach is based on a deep neural network model fed by the signal from a 9-axis motion sensor (IMU) placed on the chest. The model can be run on mobile platforms (iOS, Android). We discuss design requirements for the system and their impact on data collection protocols. We present an architecture based on an encoder pretrained with contrastive learning. Compared to end-to-end training, the presented approach significantly improves the developed model's quality in terms of accuracy (F1 score, MAPE) and robustness (false-positive rate) during background activity. We make the AIDLAB-HAR dataset publicly available to encourage further research.
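
As a rough sketch of contrastive encoder pretraining of the kind mentioned above, the function below implements a generic NT-Xent-style loss over two augmented views of the same IMU window; it illustrates the technique in general and is not the AIDLAB implementation.

    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.1):
        """Contrastive loss for two augmented views z1, z2 of shape (batch, dim)."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # (2B, dim) unit embeddings
        sim = z @ z.t() / temperature                          # cosine similarity logits
        n = z1.size(0)
        sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(sim, targets)                   # each view's positive is its pair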


Subject(s)
Human Activities; Neural Networks, Computer; Telemedicine; Humans; Exercise/physiology; Algorithms
19.
Sensors (Basel) ; 24(12), 2024 Jun 16.
Article in English | MEDLINE | ID: mdl-38931682

ABSTRACT

Monitoring activities of daily living (ADLs) plays an important role in measuring and responding to a person's ability to manage their basic physical needs. Effective recognition systems for monitoring ADLs must successfully recognize naturalistic activities that also realistically occur at infrequent intervals. However, existing systems either primarily focus on recognizing more separable, controlled activity types or are trained on balanced datasets where activities occur more frequently. In our work, we investigate the challenges associated with applying machine learning to an imbalanced dataset collected from a fully in-the-wild environment. This analysis shows that the combination of preprocessing techniques to increase recall and postprocessing techniques to increase precision can result in more desirable models for tasks such as ADL monitoring. In a user-independent evaluation using in-the-wild data, these techniques resulted in a model that achieved an event-based F1-score of over 0.9 for brushing teeth, combing hair, walking, and washing hands. This work tackles fundamental challenges in machine learning that will need to be addressed in order for these systems to be deployed and reliably work in the real world.
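
One common postprocessing step of the kind mentioned above is to discard predicted activity segments shorter than a minimum duration, trading recall for precision in event-based scoring; the sketch below is illustrative rather than the authors' exact pipeline, and the minimum length and background label are assumptions.

    def drop_short_segments(labels, min_len=5, background="other"):
        """Relabel predicted runs shorter than min_len windows as background activity."""
        out = list(labels)
        start = 0
        for i in range(1, len(out) + 1):
            if i == len(out) or out[i] != out[start]:
                if out[start] != background and i - start < min_len:
                    out[start:i] = [background] * (i - start)
                start = i
        return out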


Subject(s)
Activities of Daily Living; Human Activities; Machine Learning; Humans; Algorithms; Walking/physiology; Pattern Recognition, Automated/methods
20.
Sensors (Basel) ; 24(12), 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38931728

ABSTRACT

There has been a resurgence of applications focused on human activity recognition (HAR) in smart homes, especially in the field of ambient intelligence and assisted-living technologies. However, such applications present numerous significant challenges to any automated analysis system operating in the real world, such as variability, sparsity, and noise in sensor measurements. Although state-of-the-art HAR systems have made considerable strides in addressing some of these challenges, they suffer from a practical limitation: they require successful pre-segmentation of continuous sensor data streams prior to automated recognition, i.e., they assume that an oracle is present during deployment, and that it is capable of identifying time windows of interest across discrete sensor events. To overcome this limitation, we propose a novel graph-guided neural network approach that performs activity recognition by learning explicit co-firing relationships between sensors. We accomplish this by learning a more expressive graph structure representing the sensor network in a smart home in a data-driven manner. Our approach maps discrete input sensor measurements to a feature space through the application of attention mechanisms and hierarchical pooling of node embeddings. We demonstrate the effectiveness of our proposed approach by conducting several experiments on CASAS datasets, showing that the resulting graph-guided neural network outperforms the state-of-the-art method for HAR in smart homes across multiple datasets and by large margins. These results are promising because they push HAR for smart homes closer to real-world applications.
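
As a toy sketch of the general idea above (a data-driven, learnable relationship matrix between sensors combined with attention-based pooling of node embeddings), written in plain PyTorch; the dimensions are assumptions and this is not the paper's model.

    import torch
    import torch.nn as nn

    class LearnedGraphHAR(nn.Module):
        def __init__(self, n_sensors, feat_dim=16, hidden=64, n_classes=10):
            super().__init__()
            self.adj_logits = nn.Parameter(torch.zeros(n_sensors, n_sensors))  # learned sensor graph
            self.embed = nn.Linear(feat_dim, hidden)
            self.attn_score = nn.Linear(hidden, 1)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):                                # x: (batch, n_sensors, feat_dim)
            adj = torch.softmax(self.adj_logits, dim=-1)     # row-normalised co-firing weights
            h = torch.relu(self.embed(x))                    # node embeddings
            h = adj @ h                                      # propagate along the learned graph
            w = torch.softmax(self.attn_score(h), dim=1)     # attention pooling over sensors
            return self.head((w * h).sum(dim=1))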


Subject(s)
Human Activities; Neural Networks, Computer; Humans; Algorithms; Pattern Recognition, Automated/methods