Results 1 - 4 of 4
1.
Sensors (Basel) ; 22(12)2022 Jun 18.
Article in English | MEDLINE | ID: mdl-35746384

ABSTRACT

Many authors have been working on approaches that allow social robots to share space with humans in a more realistic and comfortable way. This paper proposes a new navigation strategy for social environments that recognizes and respects the social conventions of individuals and groups. To achieve this, we apply Delaunay triangulation to connect people as the vertices of a triangle network. We then define a complete asymmetric Gaussian function (for individuals and groups) to delimit zones the robot must avoid crossing. Furthermore, we propose a feature-generalization scheme, called the socialization feature, that incorporates perception information to modulate the variance of the Gaussian function. Simulation results demonstrate that, compared with a standard A* algorithm, the proposed approach modifies the path according to the robot's perception.
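The two building blocks named above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the variance parameters (`sx_front`, `sx_back`, `sy`) and the example coordinates are assumptions, chosen only to show that the cost field is larger ahead of a person than behind.

```python
import numpy as np
from scipy.spatial import Delaunay


def asymmetric_gaussian(px, py, cx, cy, theta, sx_front, sx_back, sy):
    """Asymmetric 2D Gaussian cost around a person at (cx, cy) facing theta.
    The variance ahead of the person (sx_front) differs from the one behind
    (sx_back), so the avoidance zone extends further in the facing direction.
    """
    dx, dy = px - cx, py - cy
    # rotate the query point into the person's reference frame
    a = np.cos(theta) * dx + np.sin(theta) * dy    # along facing direction
    b = -np.sin(theta) * dx + np.cos(theta) * dy   # lateral
    sx = sx_front if a >= 0 else sx_back
    return float(np.exp(-(a**2 / (2 * sx**2) + b**2 / (2 * sy**2))))


# connect people as vertices of a triangle network (positions are illustrative)
people = np.array([[0.0, 0.0], [2.0, 0.5], [1.0, 2.0], [3.0, 2.5]])
tri = Delaunay(people)          # tri.simplices lists the triangles over the group

# cost 1 m ahead of a person facing +x is higher than 1 m behind
ahead = asymmetric_gaussian(1.0, 0.0, 0.0, 0.0, 0.0, 2.0, 0.5, 1.0)
behind = asymmetric_gaussian(-1.0, 0.0, 0.0, 0.0, 0.0, 2.0, 0.5, 1.0)
```

A planner such as A* would then add this cost to its edge weights so paths bend around the high-cost zones.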


Subject(s)
Robotics, Algorithms, Computer Simulation, Humans, Normal Distribution, Robotics/methods, Social Interaction
2.
Sensors (Basel) ; 22(10)2022 May 14.
Article in English | MEDLINE | ID: mdl-35632160

ABSTRACT

Social robotics is an emerging area becoming present in social spaces through the introduction of autonomous social robots. Social robots offer services, perform tasks, and interact with people in such environments, demanding more efficient and complex Human-Robot Interaction (HRI) designs. One strategy to improve HRI is to give robots the capacity to detect the emotions of the people around them, so that they can plan a trajectory, modify their behaviour, and generate an appropriate interaction based on the analysed information. However, in social environments where groups of people are common, new approaches are needed so that robots can recognise groups and their emotions, which can also be associated with the scene in which the group is participating. Some existing studies focus on detecting group cohesion and recognising group emotions; nevertheless, they do not perform these recognition tasks from a robocentric perspective that considers the sensory capacity of the robot. In this context, we present a system that recognises scenes in terms of groups of people and then detects the global (prevailing) emotion of the scene. The proposed approach to visualising and recognising emotions in typical HRI is based on the size of the faces the robot recognises during navigation (face sizes decrease as the robot moves away from a group of people). On each frame of the visual sensor's video stream, individual emotions are recognised with a Visual Geometry Group (VGG) neural network pre-trained for face recognition (VGGFace); the individual emotions are then aggregated with a fusion method to obtain the frame emotion, and the emotions of the constituent frames are aggregated in turn to obtain the global (prevailing) emotion of the scene (group of people).
Additionally, this work proposes a strategy to create image/video datasets for validating the estimation of personal and scene emotions. Both datasets are generated in a simulated environment based on the Robot Operating System (ROS), from videos captured by robots through their sensory capabilities. Tests are performed in two simulated ROS/Gazebo environments: a museum and a cafeteria. Results show an accuracy of 99.79% for individual emotion detection, and of 90.84% and 89.78% for per-frame group (scene) emotion detection in the cafeteria and museum scenarios, respectively.
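The two-level aggregation described above (faces to frame, frames to scene) can be sketched with a simple majority vote. The abstract does not specify the fusion method, so the vote below is a stand-in, and the emotion labels are illustrative.

```python
from collections import Counter


def fuse(emotions):
    """Aggregate individual emotion labels into one prevailing label.
    Majority vote is used here as a simple stand-in for the fusion method."""
    if not emotions:
        return "neutral"
    return Counter(emotions).most_common(1)[0][0]


# per-face predictions on each frame (e.g. from a VGGFace-based classifier)
frames = [
    ["happy", "happy", "sad"],
    ["happy", "neutral"],
    ["sad", "sad", "sad"],
]

# level 1: fuse face emotions into one emotion per frame
frame_emotions = [fuse(faces) for faces in frames]

# level 2: fuse frame emotions into the global (scene) emotion
scene_emotion = fuse(frame_emotions)
```

The same `fuse` function serves both levels, which keeps the robocentric pipeline uniform regardless of how many faces or frames are observed.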


Subject(s)
Robotics, Emotions, Humans, Reactive Oxygen Species, Robotics/methods, Social Interaction, Social Perception
3.
Sensors (Basel) ; 21(4)2021 Feb 13.
Article in English | MEDLINE | ID: mdl-33668412

ABSTRACT

For social robots, knowledge of human emotional states is essential for adapting their behavior or associating emotions with other entities. Robots gather the information from which emotions are detected via different media, such as text, speech, images, or videos. The multimedia content is then processed to recognize emotions/sentiments, for example by analyzing faces and postures in images/videos with machine learning techniques, or by converting speech into text and performing emotion detection with natural language processing (NLP) techniques. Keeping this information in semantic repositories opens a wide range of possibilities for smart applications. We propose a framework that allows social robots to detect emotions and store this information in a semantic repository based on EMONTO (an EMotion ONTOlogy), an ontology to represent emotions. As a proof of concept, we develop a first version of this framework focused on emotion detection in text, which can be obtained directly as text or by converting speech to text. We tested the implementation in a case study of tour-guide robots for museums, relying on a speech-to-text converter based on the Google Application Programming Interface (API) and a Python library, a neural network that labels the emotions in texts using NLP transformers, and EMONTO integrated with an ontology for museums; it is thus possible to register the emotions that artworks produce in visitors. We evaluate the classification model, obtaining results equivalent to a state-of-the-art transformer-based model, with a clear roadmap for improvement.
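The shape of the text branch of this pipeline (transcribed speech in, emotion label out, record stored for the semantic repository) can be sketched as below. This is only an illustration of the data flow: the keyword lookup stands in for the NLP-transformer classifier, and the record fields, artwork IDs, and keyword lists are all hypothetical, not part of EMONTO.

```python
# Toy keyword classifier standing in for the transformer-based emotion model.
EMOTION_KEYWORDS = {
    "joy": {"beautiful", "wonderful", "love"},
    "sadness": {"sad", "gloomy", "cry"},
    "surprise": {"unexpected", "wow", "astonishing"},
}


def label_emotion(text):
    """Label a visitor utterance with an emotion (stand-in classifier)."""
    words = set(text.lower().split())
    for emotion, keys in EMOTION_KEYWORDS.items():
        if words & keys:
            return emotion
    return "neutral"


def register_emotion(artwork_id, visitor_text):
    """Build the record a semantic repository could store: which artwork
    produced which emotion in a visitor (field names are illustrative)."""
    return {
        "artwork": artwork_id,
        "text": visitor_text,
        "emotion": label_emotion(visitor_text),
    }


record = register_emotion("artwork-001", "What a beautiful portrait")
```

In the framework described above, `label_emotion` would be the transformer model and the returned record would be mapped onto EMONTO classes before storage.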


Subject(s)
Natural Language Processing, Robotics, Emotions, Humans, Semantics, Speech
4.
Micromachines (Basel) ; 12(2)2021 Feb 13.
Article in English | MEDLINE | ID: mdl-33668527

ABSTRACT

Nowadays, mobile robots play an important role in different areas of science, industry, academia, and even everyday life, and their abilities and behaviours are becoming increasingly complex. In particular, in indoor environments such as hospitals, schools, banks, and museums, where the robot coexists with people and other robots, its movement and navigation must be programmed and adapted to both robot-robot and human-robot interactions. However, existing approaches focus either on multi-robot navigation (robot-robot interaction) or on social navigation with human presence (human-robot interaction), neglecting the integration of the two. Proxemic interaction has recently been used in this research domain to improve Human-Robot Interaction (HRI). In this context, we propose an autonomous navigation approach for mobile robots in indoor environments, based on the principles of proxemic theory and integrated with classical navigation algorithms such as ORCA, Social Momentum, and A*. With this novel approach, the mobile robot adapts its behaviour by analysing the proximity of people to each other, to itself, and to other robots, in order to decide and plan its navigation while exhibiting acceptable social behaviours in the presence of humans. We describe the proposed approach and show how proxemics and the classical navigation algorithms are combined to provide effective navigation while respecting social human distances. To show the suitability of our approach, we simulate several situations in which robots and humans coexist, demonstrating effective social navigation.
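A proxemics-based gate like the one described can be sketched as follows. The zone thresholds follow Hall's commonly cited distances, but the exact values and the `may_pass` policy are assumptions for illustration; the paper combines proxemics with ORCA, Social Momentum, and A* rather than a simple distance check.

```python
import math

# Proxemic zones in metres (outer limit, name); thresholds are illustrative.
ZONES = [
    (0.45, "intimate"),
    (1.2, "personal"),
    (3.6, "social"),
    (float("inf"), "public"),
]
ZONE_ORDER = ["intimate", "personal", "social", "public"]


def proxemic_zone(robot, person):
    """Classify the robot-person distance into a proxemic zone."""
    d = math.dist(robot, person)
    for limit, name in ZONES:
        if d <= limit:
            return name


def may_pass(robot, people, min_zone="social"):
    """Gate a planned position: allow it only if every person is at least
    in `min_zone` (i.e. no one is in a closer, more sensitive zone)."""
    threshold = ZONE_ORDER.index(min_zone)
    return all(
        ZONE_ORDER.index(proxemic_zone(robot, p)) >= threshold
        for p in people
    )
```

A planner would call a check like `may_pass` on candidate waypoints, discarding those that intrude into personal or intimate space, while robot-robot conflicts are resolved by the multi-robot algorithm (e.g. ORCA).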
