Results 1 - 7 of 7
1.
Article in English | MEDLINE | ID: mdl-38015694

ABSTRACT

Vision training is important for basketball players: they must efficiently search for teammates who have wide-open opportunities to shoot, observe the defenders around those teammates, and quickly choose a proper way to pass the ball to the most suitable one. We develop an immersive virtual reality (VR) system called VisionCoach that simulates the player's viewing perspective and generates three systematically designed vision training tasks to support this training process. By recording the player's eye-gaze and dribbling video sequences, the proposed system can analyze vision-related behavior to assess training effectiveness. To demonstrate that the proposed VR training system can facilitate the cultivation of vision ability, we recruited 14 experienced players for a 6-week between-subjects study comparing the proposed system with the most frequently used 2D vision training method, the Vision Performance Enhancement (VPE) program. Qualitative experiences and quantitative training results show that the proposed immersive VR training system can effectively improve players' vision ability in terms of gaze behavior and dribbling stability. Furthermore, training in the VR-VisionCoach condition transfers the learned abilities to real scenarios more readily than training in the 2D-VPE condition.
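
A minimal Python/NumPy sketch of one hypothetical gaze metric of the kind the analysis above relies on: the fraction of frames in which the recorded gaze falls inside a target region of interest (e.g., a wide-open teammate). The data layout and the example region are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def gaze_dwell_ratio(gaze_xy: np.ndarray, target_boxes: np.ndarray) -> float:
        """gaze_xy: (T, 2) gaze points per frame; target_boxes: (T, 4) as (x0, y0, x1, y1)."""
        x, y = gaze_xy[:, 0], gaze_xy[:, 1]
        x0, y0, x1, y1 = target_boxes.T
        inside = (x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)
        return float(inside.mean())

    # Hypothetical usage: 10 s of gaze at 60 Hz in normalized screen coordinates,
    # with a fixed example target region.
    rng = np.random.default_rng(0)
    gaze = rng.uniform(0.0, 1.0, size=(600, 2))
    boxes = np.tile([0.4, 0.4, 0.6, 0.6], (600, 1))
    print(f"dwell ratio: {gaze_dwell_ratio(gaze, boxes):.2f}")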

2.
IEEE Trans Cybern ; 52(5): 3172-3183, 2022 May.
Article in English | MEDLINE | ID: mdl-32776885

ABSTRACT

To cultivate professional sports referees, we develop a sports referee training system that can recognize whether a trainee wearing the Myo armband makes the correct judging signals while watching a prerecorded professional game. The system has to correctly recognize a set of gestures related to official referee's signals (ORSs) and another set of gestures used to intuitively interact with the system. These two gesture sets involve both large-motion and subtle-motion gestures, and existing sensor-based methods using handcrafted features do not recognize all of these gestures well. In this work, deep belief networks (DBNs) are utilized to learn more representative features for hand gesture recognition, and selected handcrafted features are combined with the DBN features to achieve more robust recognition. Moreover, a hierarchical recognition scheme is designed to first classify the input gesture as a large- or subtle-motion gesture; the corresponding classifier for that group is then used to obtain the final recognition result. In addition, the Myo armband consists of eight-channel surface electromyography (sEMG) sensors and an inertial measurement unit (IMU), and these heterogeneous signals can be fused to achieve better recognition accuracy. We take basketball as an example to validate the proposed training system, and the experimental results show that the proposed hierarchical scheme, which considers DBN features of multimodal data, outperforms other methods.
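
As a structural sketch of the hierarchical scheme described above (not the paper's pipeline), the following Python code, with scikit-learn SVMs standing in for the actual classifiers, first routes a gesture to the large-motion or subtle-motion group and then applies the group-specific classifier; the DBN and handcrafted sEMG/IMU features are abstracted into plain feature vectors, e.g. X = np.hstack([dbn_feats, handcrafted_feats]).

    import numpy as np
    from sklearn.svm import SVC

    class HierarchicalGestureClassifier:
        def __init__(self):
            self.coarse = SVC()                 # large vs. subtle motion
            self.fine = {0: SVC(), 1: SVC()}    # one classifier per motion group

        def fit(self, X, y_group, y_label):
            # y_group: 0/1 motion-group labels; y_label: final gesture labels (integers).
            self.coarse.fit(X, y_group)
            for g, clf in self.fine.items():
                mask = y_group == g
                clf.fit(X[mask], y_label[mask])
            return self

        def predict(self, X):
            groups = self.coarse.predict(X)
            out = np.empty(len(X), dtype=int)
            for g, clf in self.fine.items():
                mask = groups == g
                if mask.any():
                    out[mask] = clf.predict(X[mask])
            return out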


Subject(s)
Gestures; Upper Extremity; Accelerometry; Algorithms; Electromyography; Hand; Learning
3.
IEEE Trans Vis Comput Graph ; 28(8): 2970-2982, 2022 Aug.
Article in English | MEDLINE | ID: mdl-33351762

ABSTRACT

In this article, a VR-based basketball training system comprising a standalone VR device and a tablet is proposed. The system is intended to improve players' ability to understand offensive tactics and to practice these tactics correctly. We compare the training effectiveness of various degrees of immersion, including a conventional basketball tactic board, a 2D monitor, and virtual reality. A multi-camera human tracking system was designed and built around a real-world basketball court to record and analyze each player's running trajectory during tactical execution. The accuracy of the running path and the hesitation time at each tactical step were evaluated for each participant. Furthermore, we assessed several subjective measures, including simulator sickness, presence, and sport imagery ability, to explore the feasibility of the proposed VR framework for basketball tactics training more comprehensively. The results indicate that the proposed system is useful for learning complex tactics and that high-immersion VR training improves athletes' strategic imagery abilities.
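
The two objective measures mentioned above could be computed roughly as in the following Python/NumPy sketch; the data layout (2D court positions per frame) and the speed threshold are illustrative assumptions, not the paper's evaluation code.

    import numpy as np

    def path_deviation(actual: np.ndarray, reference: np.ndarray) -> float:
        """Mean distance from each actual point (N, 2) to its nearest reference point (M, 2)."""
        d = np.linalg.norm(actual[:, None, :] - reference[None, :, :], axis=-1)
        return float(d.min(axis=1).mean())

    def hesitation_time(positions: np.ndarray, fps: float, speed_thresh: float = 0.5) -> float:
        """Total time (s) the instantaneous speed stays below speed_thresh (court units/s)."""
        speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps
        return float((speeds < speed_thresh).sum() / fps)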


Subject(s)
Basketball; Virtual Reality; Computer Graphics; Feasibility Studies; Humans; Learning
4.
Comput Methods Programs Biomed ; 174: 51-64, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29307471

ABSTRACT

Tongue features are an important objective basis for clinical diagnosis and treatment in both Western medicine and Chinese medicine. The need for continuous monitoring of health conditions motivates us to develop an automatic tongue diagnosis system based on the built-in sensors of smartphones. However, tongue images taken by smartphones differ considerably in color under various lighting conditions, which affects the diagnosis, especially when the appearance of the tongue fur is used to infer health conditions. In this paper, we capture paired tongue images with and without flash, and the color difference between the paired images is used to estimate the lighting condition with a support vector machine (SVM). Color correction matrices for three common kinds of light (i.e., fluorescent, halogen, and incandescent) are pre-trained using a ColorChecker-based method, and the pre-trained matrix corresponding to the estimated lighting is then applied to eliminate the color distortion. We further use tongue fur detection as an example to discuss the effect of different model parameters and ColorCheckers for training the tongue color correction matrix under different lighting conditions. Finally, to demonstrate the potential use of the proposed system, we recruited 246 patients over a period of 2.5 years from a local hospital in Taiwan and examined the correlations between the captured tongue features and alanine aminotransferase (ALT)/aspartate aminotransferase (AST), which are important biomarkers for liver diseases. We found that some tongue features correlate strongly with AST or ALT, suggesting that tongue features captured on a smartphone could provide an early warning of liver diseases.
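
A minimal Python sketch of the correction pipeline outlined above, with placeholder values: the per-channel color difference between the flash and no-flash shots feeds an SVM that predicts the light source, and the pre-trained 3x3 matrix for that source is applied to the no-flash image. The identity matrices and the training call are stand-ins, not the ColorChecker-trained values from the paper.

    import numpy as np
    from sklearn.svm import SVC

    LIGHTS = ["fluorescent", "halogen", "incandescent"]
    CORRECTION = {name: np.eye(3) for name in LIGHTS}   # placeholder 3x3 correction matrices

    def flash_difference_feature(img_flash: np.ndarray, img_noflash: np.ndarray) -> np.ndarray:
        """Mean per-channel RGB difference between the paired shots (H, W, 3 uint8 images)."""
        diff = img_flash.astype(float) - img_noflash.astype(float)
        return diff.reshape(-1, 3).mean(axis=0)

    def correct_color(img: np.ndarray, light: str) -> np.ndarray:
        """Apply the pre-trained correction matrix for the estimated light source."""
        m = CORRECTION[light]
        out = img.reshape(-1, 3).astype(float) @ m.T
        return np.clip(out, 0, 255).reshape(img.shape).astype(np.uint8)

    # Training on labeled flash/no-flash pairs (features: shape (N, 3), labels: light names):
    # clf = SVC().fit(train_features, train_labels)
    # light = clf.predict([flash_difference_feature(img_flash, img_noflash)])[0]
    # corrected = correct_color(img_noflash, light)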


Subject(s)
Image Processing, Computer-Assisted/methods; Medicine, Chinese Traditional/methods; Smartphone; Support Vector Machine; Tongue/physiopathology; Algorithms; Color; Diagnosis, Computer-Assisted/methods; Equipment Design; Humans; Lighting; Liver Diseases/diagnosis; Liver Diseases/physiopathology; Taiwan; Temperature
5.
J Med Syst ; 40(1): 18, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26525056

ABSTRACT

BACKGROUND: An automatic tongue diagnosis framework is proposed to analyze tongue images taken by smartphones. Unlike those used in conventional tongue diagnosis systems, our input tongue images are usually low-resolution and taken under unknown lighting conditions; consequently, existing tongue diagnosis methods cannot be applied directly to give accurate results. MATERIALS AND METHODS: We use a support vector machine (SVM) to predict the lighting condition, and thus the corresponding color correction matrix, from the color difference between images taken with and without flash. We also modify state-of-the-art fur and fissure detection for tongue images by taking hue information into consideration and adding a denoising step. RESULTS: Our method is able to correct the color of tongue images under different lighting conditions (e.g., fluorescent, incandescent, and halogen illuminants) and provides better accuracy in tongue feature detection with less processing complexity than the prior work. CONCLUSIONS: We propose an automatic tongue diagnosis framework that can be applied to smartphones. Unlike prior work, which operates only in a controlled environment, our system adapts to different lighting conditions by employing a novel color correction parameter estimation scheme.
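
The hue-aware fur detection with a denoising step mentioned above might look roughly like the following OpenCV sketch; the HSV thresholds are illustrative guesses, not the values used in the paper.

    import cv2
    import numpy as np

    def detect_fur(tongue_bgr: np.ndarray) -> np.ndarray:
        """Return a binary mask of likely tongue-fur pixels (whitish/yellowish, low saturation)."""
        hsv = cv2.cvtColor(tongue_bgr, cv2.COLOR_BGR2HSV)
        # Low saturation with high value roughly separates fur from the redder tongue body.
        mask = cv2.inRange(hsv, (0, 0, 120), (179, 80, 255))
        # Denoising step: median filtering removes isolated misclassified pixels.
        return cv2.medianBlur(mask, 5)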


Subject(s)
Color; Image Enhancement/instrumentation; Medicine, Chinese Traditional/instrumentation; Smartphone; Support Vector Machine; Tongue/physiopathology; Humans; Lighting; Regression Analysis
6.
IEEE Trans Cybern ; 45(4): 742-53, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25069133

ABSTRACT

The difficulty of vision-based posture estimation has been greatly reduced with the aid of commercial depth cameras such as the Microsoft Kinect. However, much remains to be done to bridge the gap between human posture estimation and the understanding of human movements. Human movement assessment is an important technique for exercise learning in the field of healthcare. In this paper, we propose an action tutor system that enables the user to interactively retrieve a learning exemplar of the target action movement and to immediately receive motion instructions while learning it in front of the Kinect. The proposed system is composed of two stages. In the retrieval stage, nonlinear time warping algorithms retrieve video segments similar to the query movement roughly performed by the user. In the learning stage, the user learns from the selected video exemplar, and a motion assessment covering both static and dynamic differences is presented to the user in an effective and organized way, helping him or her perform the action movement correctly. Experiments are conducted on videos of ten action types, and the results show that the proposed human action descriptor is representative for action video retrieval and that the tutor system effectively helps the user learn action movements.
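
The retrieval stage described above hinges on nonlinear time warping; the following Python sketch computes a standard dynamic-time-warping (DTW) distance between per-frame pose-feature sequences and returns the most similar exemplar, abstracting the Kinect skeleton features into fixed-length vectors. It illustrates the idea rather than the paper's specific warping algorithm.

    import numpy as np

    def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Dynamic time warping distance between two (T, D) feature sequences."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return float(cost[n, m])

    def retrieve(query: np.ndarray, exemplars: list) -> int:
        """Index of the exemplar sequence most similar to the query movement."""
        return int(np.argmin([dtw_distance(query, e) for e in exemplars]))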


Subject(s)
Actigraphy/instrumentation; Actigraphy/methods; Motor Activity/physiology; Movement/physiology; Pattern Recognition, Automated/methods; Video Games; Algorithms; Computer Systems; Humans; Reproducibility of Results; Sensitivity and Specificity; Transducers
7.
IEEE Trans Image Process ; 23(3): 1047-59, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24474374

ABSTRACT

Camera-enabled mobile devices are commonly used as interaction platforms for linking the user's virtual and physical worlds in numerous research and commercial applications, such as providing an augmented reality interface for mobile information retrieval. These application scenarios give rise to a key technique: visual object recognition in daily life. On-premise signs (OPSs), a popular form of commercial advertising, are widely used in daily life. OPSs often exhibit great visual diversity (e.g., appearing in arbitrary sizes) along with complex environmental conditions (e.g., foreground and background clutter). Observing that such real-world characteristics are lacking in most existing image data sets, we first propose an OPS data set, OPS-62, in which a total of 4649 OPS images of 62 different businesses are collected from Google's Street View. Then, to address the problem of real-world OPS learning and recognition, we develop a probabilistic framework based on distributional clustering, in which the distributional information of each visual feature (the distribution of its associated OPS labels) is exploited as a reliable selection criterion for building discriminative OPS models. Experiments on the OPS-62 data set demonstrate that our approach outperforms state-of-the-art probabilistic latent semantic analysis models, achieving more accurate recognition and fewer false alarms, with a significant 151.28% relative improvement in the average recognition rate. Meanwhile, our approach is simple, linear, and can be executed in parallel, making it practical and scalable for large-scale multimedia applications.
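
One way to read the distributional selection criterion above is to score each visual feature (e.g., a quantized visual word) by how concentrated its OPS-label distribution is and keep the most concentrated, hence most discriminative, features. The following Python sketch ranks features by the entropy of their label histograms; it is an interpretation for illustration, not the paper's exact formulation, and assumes every feature occurs at least once.

    import numpy as np

    def label_entropy(counts: np.ndarray) -> float:
        """Entropy (nats) of a feature's label-occurrence histogram."""
        p = counts / counts.sum()
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    def select_features(cooccurrence: np.ndarray, keep: int) -> np.ndarray:
        """cooccurrence: (num_features, num_labels) counts; return the indices of the
        `keep` features whose label distributions are most concentrated."""
        scores = np.array([label_entropy(row) for row in cooccurrence])
        return np.argsort(scores)[:keep]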


Subject(s)
Algorithms; Artificial Intelligence; Image Interpretation, Computer-Assisted/methods; Location Directories and Signs; Natural Language Processing; Pattern Recognition, Automated/methods; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity