Results 1 - 6 of 6
1.
Soft Robot ; 9(3): 473-485, 2022 06.
Article in English | MEDLINE | ID: mdl-34415805

ABSTRACT

We introduce a novel in-home hand rehabilitation system for monitoring hand motions and assessing grip forces of stroke patients. The overall system is composed of a sensing device and a computer vision system. The sensing device is a lightweight cylindrical object for easy grip and manipulation, which is covered by a passive sensing layer called "Smart Skin." The Smart Skin is fabricated using soft silicone elastomer, which contains embedded microchannels partially filled with colored fluid. When the Smart Skin is compressed by grip forces, the colored fluid rises and fills the top-surface display area. Then, the computer vision system captures the image of the display area through a red-green-blue camera, detects the length change of the liquid through image processing, and eventually maps the liquid length to the calibrated force for estimating the gripping force. The passive sensing mechanism of the proposed Smart Skin device works in conjunction with a single camera setup, making the system simple and easy to use, while also requiring minimal maintenance effort. Our system, on one hand, aims to support home-based rehabilitation therapy with minimal or no supervision by recording the training process and the force data, which can be automatically conveyed to physical therapists. On the other hand, the therapists can also remotely instruct the patients with their training prescriptions through online videos. This study first describes the design, fabrication, and calibration of the Smart Skin, and the algorithm for image processing, and then presents experimental results from the integrated system. The Smart Skin prototype shows a relatively linear relationship between the applied force and the length change of the liquid in the range of 0-35 N. The computer vision system shows an estimation error below 4% and relatively high estimation stability under different hand motions.
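The calibration step described above maps the measured liquid-column length to grip force over a roughly linear 0-35 N range. A minimal sketch of that mapping, assuming a least-squares linear fit; the calibration pairs and function names here are hypothetical, not taken from the paper:

```python
# Minimal sketch of the length-to-force calibration step: fit a line
# force = a * length + b from calibration pairs, then invert a measured
# liquid length into a force estimate. All data below is hypothetical.

def fit_linear(lengths, forces):
    """Least-squares fit of force = a * length + b."""
    n = len(lengths)
    mx = sum(lengths) / n
    my = sum(forces) / n
    a = sum((x - mx) * (y - my) for x, y in zip(lengths, forces)) / \
        sum((x - mx) ** 2 for x in lengths)
    b = my - a * mx
    return a, b

def estimate_force(length_mm, a, b):
    """Map a measured liquid length (mm) to an estimated grip force (N)."""
    return a * length_mm + b

# Hypothetical calibration data spanning the reported 0-35 N range.
cal_lengths = [0.0, 5.0, 10.0, 15.0, 20.0]    # mm of liquid rise
cal_forces  = [0.0, 8.75, 17.5, 26.25, 35.0]  # applied force in N
a, b = fit_linear(cal_lengths, cal_forces)
print(estimate_force(12.0, a, b))  # force estimate for a 12 mm rise, ~21 N
```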


Subject(s)
Hand , Optical Devices , Hand Strength , Humans , Motion , Pressure
2.
Front Robot AI ; 8: 720319, 2021.
Article in English | MEDLINE | ID: mdl-35155586

ABSTRACT

As assistive robotics has expanded to many task domains, comparing assistive strategies across this varied body of research becomes increasingly difficult. To begin to unify the disparate domains into a more general theory of assistance, we present a definition of assistance, a survey of existing work, and three key design axes that occur in many domains and benefit from the examination of assistance as a whole. We first define an assistance perspective that focuses on understanding a robot that is in control of its actions but subordinate to a user's goals. Next, we use this perspective to explore design axes that arise from the problem of assistance more generally and explore how these axes have comparable trade-offs across many domains. We investigate how the assistive robot handles other people in the interaction, how the robot design can operate in a variety of action spaces to enact similar goals, and how assistive robots can vary the timing of their actions relative to the user's behavior. While these axes are by no means comprehensive, we propose them as useful tools for unifying assistance research across domains and as examples of how taking a broader perspective on assistance enables more cross-domain theorizing about assistance.

3.
IEEE Trans Pattern Anal Mach Intell ; 42(2): 304-317, 2020 02.
Article in English | MEDLINE | ID: mdl-30295615

ABSTRACT

We address the problem of incrementally modeling and forecasting long-term goals of a first-person camera wearer: what the user will do, where they will go, and what goal they seek. In contrast to prior work in trajectory forecasting, our algorithm, Darko, goes further to reason about semantic states (will I pick up an object?), and future goal states that are far in terms of both space and time. Darko learns and forecasts from first-person visual observations of the user's daily behaviors via an Online Inverse Reinforcement Learning (IRL) approach. Classical IRL discovers only the rewards in a batch setting, whereas Darko discovers the transitions, rewards, and goals of a user from streaming data. Among other results, we show Darko forecasts goals better than competing methods in both noisy and ideal settings, and our approach is theoretically and empirically no-regret.
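Darko's full online IRL formulation is beyond a short snippet, but the core idea of forecasting a goal from a partial trajectory can be illustrated with a maximum-entropy-style goal posterior: a goal is likely if the observed path makes little detour with respect to it. This is a generic toy illustration, not the paper's algorithm; the grid, distance metric, and the temperature beta are our own assumptions:

```python
import math

def goal_posterior(start, current, path_len, goals, beta=1.0):
    """Posterior over candidate goals given a partial trajectory.

    A goal scores highly when the walked distance plus the remaining
    distance to the goal barely exceeds the direct start-to-goal distance
    (i.e., the path makes little detour with respect to that goal).
    """
    def manhattan(p, q):
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    scores = []
    for g in goals:
        detour = path_len + manhattan(current, g) - manhattan(start, g)
        scores.append(math.exp(-beta * detour))
    z = sum(scores)
    return [s / z for s in scores]

start, current = (0, 0), (3, 0)   # user has walked 3 steps east
goals = [(5, 0), (0, 5)]          # an east goal vs. a north goal
probs = goal_posterior(start, current, path_len=3, goals=goals)
print(probs)  # the east goal dominates, since the path detours only for it
```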

4.
Bioinformatics ; 35(14): i260-i268, 2019 07 15.
Article in English | MEDLINE | ID: mdl-31510673

ABSTRACT

MOTIVATION: Since 2017, an increasing amount of attention has been paid to the supervised deep learning-based macromolecule in situ structural classification (i.e. subtomogram classification) in cellular electron cryo-tomography (CECT) due to the substantially higher scalability of deep learning. However, the success of such a supervised approach relies heavily on the availability of large amounts of labeled training data. For CECT, creating valid training data from the same data source as prediction data is usually laborious and computationally intensive. It would be beneficial to have training data from a separate data source where the annotation is readily available or can be performed in a high-throughput fashion. However, cross-data-source prediction is often biased due to the different image intensity distributions (a.k.a. domain shift). RESULTS: We adapt a deep learning-based adversarial domain adaptation (3D-ADA) method to address the domain shift problem in CECT data analysis. 3D-ADA first uses a source domain feature extractor to extract discriminative features from the training data as the input to a classifier. Then it adversarially trains a target domain feature extractor to reduce the distribution differences of the extracted features between training and prediction data. As a result, the same classifier can be directly applied to the prediction data. We tested 3D-ADA on both experimental and realistically simulated subtomogram datasets under different imaging conditions. 3D-ADA consistently improved cross-data-source prediction and outperformed two popular domain adaptation methods. Furthermore, we demonstrate that 3D-ADA can improve cross-data-source recovery of novel macromolecular structures. AVAILABILITY AND IMPLEMENTATION: https://github.com/xulabs/projects. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
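The adversarial alignment idea at the core of this method can be shown in a deliberately tiny, one-dimensional form (nothing like the paper's 3D CNN implementation): a logistic discriminator learns to separate source features from target features, while the target feature extractor is updated to fool it, which shrinks the gap between the two feature distributions. All data and parameters below are made up:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy 1-D adversarial feature alignment. The discriminator classifies
# source features as 1 and target features as 0; the target extractor
# f_t(x) = a*x + b is trained with flipped labels to fool it.
xs = [-0.5, 0.0, 0.5, 1.0]   # source-domain inputs (toy data)
xt = [2.0, 2.5, 3.0, 3.5]    # target-domain inputs, shifted distribution
a, b = 1.0, 0.0              # target extractor parameters
w, c = 0.0, 0.0              # discriminator parameters
lr = 0.1

fs = xs[:]                   # source extractor is fixed (identity here)
for _ in range(500):
    ft = [a * x + b for x in xt]
    # Discriminator step: binary cross-entropy gradient, dL/dz = p - y.
    gw = gc = 0.0
    for f, y in [(f, 1.0) for f in fs] + [(f, 0.0) for f in ft]:
        p = sigmoid(w * f + c)
        gw += (p - y) * f
        gc += (p - y)
    w -= lr * gw
    c -= lr * gc
    # Adversarial step: push target features to be classified as source.
    ga = gb = 0.0
    for x in xt:
        p = sigmoid(w * (a * x + b) + c)
        ga += (p - 1.0) * w * x
        gb += (p - 1.0) * w
    a -= lr * ga
    b -= lr * gb

mean_fs = sum(fs) / len(fs)
mean_ft = sum(a * x + b for x in xt) / len(xt)
print(abs(mean_ft - mean_fs))  # much smaller than the initial gap of 2.5
```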


Subject(s)
Electrons , Information Storage and Retrieval , Electron Microscope Tomography , Macromolecular Substances , Molecular Structure
5.
IEEE Trans Pattern Anal Mach Intell ; 40(11): 2749-2761, 2018 11.
Article in English | MEDLINE | ID: mdl-29990151

ABSTRACT

We envision a future time when wearable cameras are worn by the masses, recording first-person point-of-view videos of everyday life. While these cameras can enable new assistive technologies and novel research challenges, they also raise serious privacy concerns. For example, first-person videos passively recorded by wearable cameras will necessarily include anyone who comes into the view of a camera, with or without consent. Motivated by these benefits and risks, we developed a self-search technique tailored to first-person videos. The key observation of our work is that the egocentric head motion of a target person (i.e., the self) is observed in the point-of-view videos of both the target and the observer. The motion correlation between the target person's video and the observer's video can then be used to uniquely identify instances of the self. We incorporate this feature into the proposed approach that computes the motion correlation over densely-sampled trajectories to search for a target individual in observer videos. Our approach significantly improves self-search performance over several well-known face detectors and recognizers. Furthermore, we show how our approach can enable several practical applications such as privacy filtering, target video retrieval, and social group clustering.
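The motion-correlation score at the heart of this self-search can be sketched as a zero-mean normalized cross-correlation between the target's ego-motion signal and each candidate trajectory in the observer's video; the per-frame motion values below are hypothetical, and the real system works over densely-sampled trajectories rather than a single scalar signal:

```python
import math

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length signals."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

# Hypothetical per-frame horizontal motion magnitudes.
ego_motion = [0.1, 0.9, 0.4, 0.0, 0.7, 0.2]        # target's own camera
track_a    = [0.12, 0.85, 0.43, 0.05, 0.66, 0.25]  # candidate track A
track_b    = [0.8, 0.1, 0.2, 0.9, 0.1, 0.6]        # candidate track B

scores = {name: ncc(ego_motion, sig)
          for name, sig in [("A", track_a), ("B", track_b)]}
best = max(scores, key=scores.get)
print(best)  # the track whose motion matches the ego-motion is the self
```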


Subject(s)
Image Processing, Computer-Assisted/methods , Video Recording , Wearable Electronic Devices , Databases, Factual , Female , Head Movements , Humans , Interpersonal Relations , Machine Learning , Male , Motion
6.
Article in English | MEDLINE | ID: mdl-31448356

ABSTRACT

'Turn slightly to the left,' the navigation system announces, with the aim of directing a blind user to merge into a corridor. Yet, due to a long reaction time, the user turns too late and proceeds into the wrong hallway. Observations of such user behavior in real-world navigation settings motivate us to study the manner in which blind users react to the instructional feedback of a turn-by-turn guidance system. We found little previous work analyzing the extent of the variability among blind users in reaction to different instructional guidance during assisted navigation. To gain insight into how navigational interfaces can be better designed to accommodate the information needs of different users, we conduct a data-driven analysis of reaction variability as defined by motion and timing measures. Based on continuously tracked user motion during real-world navigation with a deployed system, we find significant variability between users in their reaction characteristics. Specifically, the statistical analysis reveals significant variability during the crucial elements of the navigation (e.g., turning and encountering obstacles). With the end-user experience in mind, we identify the need to not only adjust interface timing and content to each user's personal walking pace, but also to their individual navigation skill and style. The design implications of our study inform the development of assistive systems which consider such user-specific behavior to ensure successful navigation.
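The per-user timing analysis described above can be sketched with the Python stdlib: group reaction times by user, then compare the average within-user spread against the between-user spread of mean reaction times. All reaction times below are hypothetical, not the study's data:

```python
import statistics

# Hypothetical reaction times (seconds) from a turn instruction to the
# onset of the user's turn, grouped per user.
reactions = {
    "user1": [0.8, 0.9, 1.0, 0.85],
    "user2": [1.6, 1.8, 1.7, 1.9],
    "user3": [1.1, 1.0, 1.2, 1.15],
}

# Mean reaction time per user.
per_user_mean = {u: statistics.mean(ts) for u, ts in reactions.items()}

# Average spread within each user's own trials vs. spread across users.
within = statistics.mean(statistics.stdev(ts) for ts in reactions.values())
between = statistics.stdev(per_user_mean.values())

print(per_user_mean)
print(within, between)  # between-user spread exceeds within-user spread,
                        # i.e., users differ more than their own trials do
```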
