Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-39255105

ABSTRACT

We explored how a conversational agent's depth of knowledge affects human perceptions of it in a virtual reality (VR) environment. We designed experimental conditions with low, medium, and high depths of knowledge in the domain of game development and tested them with 27 game development students, aiming to understand how the agent's predefined knowledge level affected participants' perceptions of the agent and its knowledge. Our findings showed that participants could distinguish between the virtual agent's knowledge levels. Moreover, the agent's depth of knowledge significantly affected participants' perceptions of intelligence, rapport, factuality, the uncanny valley effect, and anthropomorphism, as well as their willingness for future interaction. We also found strong correlations between perceived knowledge, perceived intelligence, factuality, and willingness for future interaction. From our data and observations, we developed design guidelines for creating conversational agents. This study contributes to the field of human-agent interaction in VR by providing empirical evidence on the importance of tailoring a virtual agent's depth of knowledge to improve user experience, offering insights into designing more engaging and effective conversational agents.
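As an illustrative aside, the reported correlation analysis could be reproduced along the following lines; the rating columns and values below are hypothetical, and the paper does not publish analysis code:

    # Pairwise Pearson correlations between perception ratings, one row of
    # Likert-style scores per participant (all values hypothetical).
    import numpy as np
    from scipy import stats

    ratings = {
        "perceived_knowledge":    np.array([4, 5, 3, 2, 5, 4]),
        "perceived_intelligence": np.array([4, 5, 3, 3, 5, 4]),
        "factuality":             np.array([5, 5, 2, 2, 4, 4]),
        "future_interaction":     np.array([4, 4, 3, 2, 5, 5]),
    }

    names = list(ratings)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r, p = stats.pearsonr(ratings[a], ratings[b])
            print(f"{a} vs {b}: r={r:.2f}, p={p:.3f}")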

2.
Article in English | MEDLINE | ID: mdl-38236686

ABSTRACT

We introduce a novel method for co-designing the shape attributes and locomotion of autonomous moving agents by combining deep reinforcement learning and evolution with user control. Our main inspiration comes from evolution, which has produced wide variability and adaptation in Nature and improves design and behavior simultaneously. Our method takes an input agent with optional user-defined constraints, such as leg parts that should not evolve or that may change only within allowed ranges. It uses physics-based simulation to determine the agent's locomotion and finds a behavior policy for the input design, which serves as the baseline for comparison. The agent is randomly modified within the allowed ranges, creating a new generation of several hundred agents. Each generation is trained by transferring the previous policy, which significantly speeds up training. The best-performing agents are selected, and the next generation is formed from their crossovers and mutations; successive generations are trained until satisfactory results are reached. We show a wide variety of evolved agents. Even with only 10% of allowed change, the overall performance of the evolved agents improves by 50%, and when larger changes are allowed, performance improves even further, up to 150%, across a variety of agent structures. The method does not require considerable computational resources: it works on a single GPU and provides results by training thousands of agents within 30 minutes.
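A minimal sketch of the evolve-train-select loop described above, with the physics simulation and deep RL training replaced by stubs; every name below is hypothetical, and only the control flow reflects the abstract:

    import random

    def train_with_transfer(agent, warm_start_policy):
        # Stub: the paper fine-tunes the transferred deep RL policy in
        # physics-based simulation; here a "policy" is just a record.
        return {"params": list(agent), "warm": warm_start_policy is not None}

    def evaluate(policy):
        # Stub fitness: the paper measures locomotion performance instead.
        return sum(policy["params"])

    def mutate(agent, lo=0.8, hi=1.2):
        # Random modification within user-allowed ranges of change.
        return [g * random.uniform(lo, hi) for g in agent]

    def crossover(a, b):
        return [random.choice(pair) for pair in zip(a, b)]

    def evolve(base_agent, generations=5, pop_size=100, n_elite=10):
        policy = train_with_transfer(base_agent, None)      # baseline policy
        population = [mutate(base_agent) for _ in range(pop_size)]
        elite = [base_agent]
        for _ in range(generations):
            # Warm-start each agent's training from the previous policy,
            # the transfer step that speeds up training.
            scored = sorted(((evaluate(train_with_transfer(a, policy)), a)
                             for a in population), reverse=True)
            elite = [a for _, a in scored[:n_elite]]
            policy = train_with_transfer(elite[0], policy)
            # Form the next generation by crossover and mutation of the elite.
            population = [mutate(crossover(random.choice(elite),
                                           random.choice(elite)))
                          for _ in range(pop_size)]
        return elite[0], policy

    best_agent, best_policy = evolve([1.0, 1.0, 1.0])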

3.
Article in English | MEDLINE | ID: mdl-37293199

ABSTRACT

The use of virtual reality (VR) in laboratory skill training is rapidly increasing. In such applications, users often need to explore a large virtual environment within a limited physical space while completing a series of hand-based tasks (e.g., object manipulation). However, the most widely used controller-based teleport methods can conflict with the users' hand operations and result in a higher cognitive load, negatively affecting their training experience. To alleviate these limitations, we designed and implemented a locomotion method called ManiLoco that enables hands-free interaction, thus avoiding conflicts and interruptions from other tasks. Users teleport to a remote object's position by taking a step toward the object while looking at it. We evaluated ManiLoco against the state-of-the-art Point & Teleport technique in a within-subject experiment with 16 participants. The results confirmed the viability of our foot- and head-based approach and its better support for concurrent object manipulation in VR training tasks. Furthermore, our locomotion method does not require any additional hardware: it relies solely on the VR head-mounted display (HMD) and our detection of the user's stepping activity, and it can easily be added to any VR application as a plugin.
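A minimal sketch of the trigger logic the abstract describes: teleport to the gazed-at object once a step toward it is detected from HMD motion alone. The threshold and data shapes are assumptions, not the authors' implementation:

    def detect_step(hmd_heights, dip=0.03):
        # A step briefly lowers the HMD before it recovers; this simple
        # threshold test stands in for the paper's stepping detector.
        return min(hmd_heights) < hmd_heights[0] - dip

    def maniloco_update(gaze_target, hmd_heights, current_pos):
        # Teleport to the gazed-at object's position when a step is
        # detected; otherwise the user stays in place.
        if gaze_target is not None and detect_step(hmd_heights):
            return gaze_target
        return current_pos

    # Toy usage: a 1.72 m tall user dips ~5 cm while stepping toward a target.
    print(maniloco_update((3.0, 0.0, 5.0),
                          [1.72, 1.70, 1.67, 1.71],
                          (0.0, 0.0, 0.0)))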

4.
Front Hum Neurosci; 16: 883467, 2022.
Article in English | MEDLINE | ID: mdl-36034123

ABSTRACT

Although interest in brain-computer interfaces (BCIs) from researchers and consumers continues to increase, many BCIs lack the complexity and imaginative properties thought to guide users toward successful brain activity modulation. We investigate the possibility of using a complex BCI by developing an experimental story environment with which users interact through cognitive thought strategies. In our system, the user's frontal alpha asymmetry (FAA), measured with electroencephalography (EEG), is linearly mapped to the color saturation of the main character in the story. We implemented a user-friendly experimental design using a comfortable EEG device and a short neurofeedback (NF) training protocol. Seven of 19 participants successfully increased FAA during the course of the study, for a total of ten successful blocks out of 152. We detail the contributions of left and right prefrontal cortical activity to FAA in both successful and unsuccessful story blocks. Additionally, we examine inter-subject correlations of the EEG data and self-reported questionnaire data to understand the user experience of BCI interaction. The results suggest the potential of imaginative story BCI environments for engaging users and allowing FAA modulation, and our data point to new research directions for BCIs investigating emotion and motivation through FAA.
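A minimal sketch of the described mapping, assuming the common ln(right) minus ln(left) alpha-power definition of FAA over an F4/F3 electrode pair and an assumed working range; the abstract does not give the paper's exact scaling:

    import numpy as np

    def faa(alpha_power_f4, alpha_power_f3):
        # Frontal alpha asymmetry as ln(right) - ln(left) alpha power;
        # the F4/F3 pairing is a common convention, assumed here.
        return np.log(alpha_power_f4) - np.log(alpha_power_f3)

    def faa_to_saturation(value, lo=-1.0, hi=1.0):
        # Linear map from an assumed FAA working range onto [0, 1]
        # color saturation for the story character.
        return float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))

    print(faa_to_saturation(faa(2.0, 1.5)))  # higher right alpha -> more color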

5.
Behav Sci (Basel); 10(9), 2020 Aug 27.
Article in English | MEDLINE | ID: mdl-32867234

ABSTRACT

This paper describes our investigation of how participants coordinate their movement behavior with a virtual crowd that surrounds them while immersed in a virtual environment. Participants were immersed in a virtual metropolitan city and instructed to cross the road and reach the opposite sidewalk, performing the task ten times; the surrounding virtual crowd was scripted to move in the same direction. During the experiment, several measurements were obtained to evaluate human movement coordination, and the time and direction in which participants started moving toward the opposite sidewalk were also captured. These data were later used to initialize the parameters of simulated characters scripted to become part of the virtual crowd, and measurements extracted from the simulated characters served as a baseline for evaluating the participants' movement coordination. The analysis revealed significant differences between the movement behaviors of the participants and the simulated characters. However, simple linear regression analyses indicated that participants' movement behavior was moderately associated with the simulated characters' movements when performing a locomotion task within a virtual crowd. This study can serve as a baseline for further research evaluating participants' movement coordination during human-virtual-crowd interactions using measurements obtained from simulated characters.
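A minimal sketch of such a simple linear regression using SciPy; the movement measure and all values below are hypothetical:

    import numpy as np
    from scipy import stats

    # Matched movement measures per trial (hypothetical values, in m/s):
    # the simulated characters' speed vs. the participant's speed.
    sim_speed = np.array([1.10, 1.25, 0.95, 1.40, 1.05, 1.30])
    human_speed = np.array([1.00, 1.20, 1.05, 1.35, 1.10, 1.22])

    fit = stats.linregress(sim_speed, human_speed)
    print(f"slope={fit.slope:.2f}, r^2={fit.rvalue**2:.2f}, p={fit.pvalue:.3f}")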

6.
Sensors (Basel); 17(11), 2017 Nov 10.
Article in English | MEDLINE | ID: mdl-29125534

ABSTRACT

This paper presents a method for reconstructing full-body locomotion sequences for virtual characters in real time, using data from a single inertial measurement unit (IMU). The task is difficult because a high number of degrees of freedom (DOFs) must be reconstructed from very few input DOFs. To solve this complex problem, the method proceeds in several steps. The user's full-body locomotion and the IMU's data are recorded simultaneously, and the data are then preprocessed so they can be handled more efficiently. The system learns the structure of the motion sequences with a hierarchical multivariate hidden Markov model with reactive interpolation functionality: the phases of the locomotion sequence are assigned at the higher hierarchical level, and the frame structure of the motion sequences is assigned at the lower hierarchical level. At runtime, the forward algorithm is used to reconstruct the virtual character's full-body motion. First, the method predicts the phase to which the input motion belongs (higher hierarchical level). Second, it predicts the closest trajectories and their progression and interpolates the most probable of them to reconstruct the character's full-body motion (lower hierarchical level). Our evaluation shows that the method runs at reasonable frame rates and reduces reconstruction errors compared with previous approaches.
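A minimal sketch of the forward algorithm used for phase prediction at the higher hierarchical level; the two-phase model and emission function below are toy assumptions, not the paper's learned HMM:

    import numpy as np

    def forward(obs, pi, A, emission):
        # obs: observation sequence; pi: initial phase probabilities;
        # A[i, j]: phase-transition probabilities; emission(o): per-phase
        # likelihood of observation o.
        alpha = pi * emission(obs[0])
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * emission(o)
            alpha /= alpha.sum()          # rescale to avoid underflow
        return alpha                      # posterior over locomotion phases

    # Toy model: two phases with Gaussian-like emissions (for illustration).
    pi = np.array([0.5, 0.5])
    A = np.array([[0.9, 0.1], [0.1, 0.9]])
    means = np.array([0.0, 1.0])
    emission = lambda o: np.exp(-0.5 * (o - means) ** 2)
    print(forward([0.1, 0.2, 0.9, 1.1], pi, A, emission))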
