Results 1 - 8 of 8
1.
Sensors (Basel) ; 24(16), 2024 Aug 10.
Article in English | MEDLINE | ID: mdl-39204865

ABSTRACT

Some of the barriers preventing virtual reality (VR) from being widely adopted are the cost and unfamiliarity of VR systems. Here, we propose that in many cases the specialized controllers shipped with most VR head-mounted displays can be replaced by a regular smartphone, cutting the cost of the system and allowing users to interact in VR with a device they are already familiar with. To achieve this, we developed SmartVR Pointer, an approach that uses a smartphone to replace the specialized controllers for two essential operations in VR: selection and navigation by teleporting. In SmartVR Pointer, a camera mounted on the head-mounted display (HMD) is tilted downwards so that it points to where the user naturally holds their phone in front of them. SmartVR Pointer supports three selection modalities: tracker-based, gaze-based, and combined/hybrid. In the tracker-based modality, image-based tracking follows a QR code displayed on the phone screen, and the phone's position is mapped to a pointer shown within the camera's field of view in the virtual environment. In the gaze-based modality, the user controls the pointer with their gaze and taps the phone for selection. The combined technique is a hybrid between gaze-based interaction in VR and tracker-based augmented reality: the user controls a VR pointer that looks and behaves like a mouse pointer by moving their smartphone to select objects within the virtual environment, and interacts with the selected objects using the smartphone's touchscreen, which handles selection and dragging. SmartVR Pointer is simple and requires no calibration and no complex hardware assembly or disassembly. We demonstrate SmartVR Pointer in an interactive VR demo where the user navigates the virtual environment using teleportation points on the floor and then solves a Tetris-style key-and-lock challenge.
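The core of the tracker-based modality above is mapping the tracked QR code's position in the HMD camera frame to a pointer position. A minimal sketch of that mapping, assuming the four corner points come from a detector such as OpenCV's `cv2.QRCodeDetector` (the function names and frame size here are illustrative, not the authors' implementation):

```python
import numpy as np

def quad_centroid(points):
    """Centroid of the four QR-code corner points returned by a detector."""
    pts = np.asarray(points, dtype=float).reshape(-1, 2)
    return pts.mean(axis=0)

def map_to_pointer(centroid, frame_size):
    """Map a pixel position in the HMD camera frame to normalized
    pointer coordinates in [0, 1] x [0, 1] (origin at top-left)."""
    w, h = frame_size
    x, y = centroid
    return (min(max(x / w, 0.0), 1.0), min(max(y / h, 0.0), 1.0))

# With OpenCV, the corners would come from something like:
#   ok, points = cv2.QRCodeDetector().detect(frame)
corners = [(300, 220), (380, 220), (380, 300), (300, 300)]
pointer = map_to_pointer(quad_centroid(corners), (640, 480))
```

The normalized pointer can then be projected into the virtual environment, e.g. by ray-casting from the camera through that screen position.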

2.
Mov Ecol ; 11(1): 27, 2023 May 16.
Article in English | MEDLINE | ID: mdl-37194049

ABSTRACT

Movement facilitates and alters species interactions, the resulting food-web structures, species distribution patterns, community structures, and the survival of populations and communities. In light of global change, it is crucial to gain a general understanding of how movement depends on traits and environmental conditions. Although insects, and notably Coleoptera, represent the largest and a functionally important taxonomic group, we still know little about their general movement capacities and how they respond to warming. Here, we measured the exploratory speed of 125 individuals of eight carabid beetle species across different temperatures and body masses using automated image-based tracking. The resulting data revealed a power-law scaling relationship of average movement speed with body mass. By additionally fitting a thermal performance curve to the data, we accounted for the unimodal temperature response of movement speed. This yielded a general allometric and thermodynamic equation that predicts exploratory speed from temperature and body mass. This equation for temperature-dependent movement speed can be incorporated into modeling approaches to predict trophic interactions or spatial movement patterns. Overall, these findings will help improve our understanding of how temperature effects on movement cascade from small to large spatial scales, and from individual to population fitness and survival across communities.
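The model described above multiplies a power-law mass term by a unimodal thermal response. A sketch of that functional form, using a Gaussian as a stand-in for the thermal performance curve (all parameter values below are placeholders, not the study's fitted estimates):

```python
import math

def exploratory_speed(mass_mg, temp_c, v0=1.0, b=0.25, t_opt=25.0, width=8.0):
    """Illustrative allometric-thermodynamic model: power-law scaling
    with body mass times a unimodal (here Gaussian) temperature
    response. v0, b, t_opt, and width are placeholder parameters."""
    allometry = v0 * mass_mg ** b          # speed rises with body mass
    thermal = math.exp(-((temp_c - t_opt) ** 2) / (2 * width ** 2))
    return allometry * thermal

# Speed peaks at the thermal optimum and increases with body mass
slow_cold = exploratory_speed(100, 10)
fast_warm = exploratory_speed(100, 25)
heavier = exploratory_speed(200, 25)
```

In practice the two components would be fitted jointly to the tracking data, e.g. with nonlinear least squares.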

3.
Ecol Evol ; 13(3): e9902, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37006889

ABSTRACT

Automated 3D image-based tracking systems are new and promising devices for investigating the foraging behavior of flying animals with great accuracy and precision. 3D analyses can provide accurate assessments of flight performance in terms of speed, curvature, and hovering. However, there have been few applications of this technology in ecology, particularly for insects. We used this technology to analyze the behavioral interactions between the Western honey bee, Apis mellifera, and its invasive predator the Asian hornet, Vespa velutina nigrithorax. We investigated whether predation success could be affected by the flight speed, flight curvature, and hovering of the Asian hornet and honey bees in front of one beehive. We recorded a total of 603,259 flight trajectories and 5175 predator-prey flight interactions, leading to 126 successful predation events, representing 2.4% predation success. Flight speeds of hornets in front of the hive entrance were much lower than those of their bee prey, in contrast to their hovering capacity, while curvature ranges overlapped between the two species. There were large differences in speed, curvature, and hovering between the exit and entrance flights of honey bees. Interestingly, we found that hornet density affected the flight performance of both honey bees and hornets. Higher hornet density led to a decrease in the speed of honey bees leaving the hive and an increase in the speed of honey bees entering the hive, together with more curved flight trajectories. These effects suggest predator-avoidance behavior by the bees. Higher honey bee flight curvature resulted in lower hornet predation success. Predation success increased as hornet numbers rose to 8 individuals, above which it decreased, likely due to competition among predators.
Although based on a single colony, this study reveals interesting outcomes derived from the use of automated 3D tracking to obtain accurate measures of individual behavior and behavioral interactions among flying species.
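The flight metrics compared above (speed and curvature) can be derived from tracked 3D positions by finite differences. A minimal sketch, assuming positions sampled at a fixed frame interval (not the study's actual analysis pipeline):

```python
import numpy as np

def speed_and_curvature(positions, dt):
    """Per-frame speed and discrete curvature from a sequence of 3D
    positions sampled at interval dt. Curvature uses the standard
    formula |v x a| / |v|^3 (illustrative, finite-difference based)."""
    p = np.asarray(positions, dtype=float)
    v = np.gradient(p, dt, axis=0)           # velocity estimate
    a = np.gradient(v, dt, axis=0)           # acceleration estimate
    speed = np.linalg.norm(v, axis=1)
    cross = np.cross(v, a)
    curvature = np.linalg.norm(cross, axis=1) / np.clip(speed, 1e-9, None) ** 3
    return speed, curvature

# A straight, constant-speed track: uniform speed, zero curvature
pos = [[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]]
speed, curv = speed_and_curvature(pos, dt=0.5)
```

Hovering could then be flagged as frames where speed falls below some threshold.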

4.
Radiother Oncol ; 123(1): 78-84, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28245908

ABSTRACT

PURPOSE: The purpose of this study was to estimate the uncertainty in voluntary deep-inspiration breath-hold (DIBH) radiotherapy for locally advanced non-small cell lung cancer (NSCLC) patients. METHODS: Perpendicular fluoroscopic movies were acquired in free breathing (FB) and DIBH during a course of visually guided DIBH radiotherapy of nine patients with NSCLC. Patients had liquid markers injected in mediastinal lymph nodes and primary tumours. Excursion, systematic and random errors, and inter-breath-hold position uncertainty were investigated using an image-based tracking algorithm. RESULTS: A mean reduction of 2-6 mm in marker excursion in DIBH versus FB was seen in the anterior-posterior (AP), left-right (LR) and cranio-caudal (CC) directions. Lymph node motion during DIBH originated from cardiac motion. The systematic errors (standard deviation (SD) of all the mean marker positions) and random errors (root-mean-square of the intra-BH SD) during DIBH were 0.5 and 0.3 mm (AP), 0.5 and 0.3 mm (LR), and 0.8 and 0.4 mm (CC), respectively. The mean inter-breath-hold shifts were -0.3 mm (AP), -0.2 mm (LR), and -0.2 mm (CC). CONCLUSION: Intra- and inter-breath-hold uncertainty of tumours and lymph nodes was small in visually guided breath-hold radiotherapy of NSCLC. Target motion could be substantially reduced, but not eliminated, using visually guided DIBH.
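The abstract defines its two error measures precisely: the systematic error is the SD of the per-breath-hold mean marker positions, and the random error is the root-mean-square of the intra-breath-hold SDs. A minimal sketch of that computation for one axis (the input values below are made up for illustration):

```python
import numpy as np

def systematic_and_random_error(breath_holds):
    """Systematic error = SD of the per-breath-hold mean marker
    positions; random error = root-mean-square of the intra-breath-hold
    SDs. Each element of breath_holds is a list of tracked marker
    positions (mm) on one axis during one breath-hold."""
    means = np.array([np.mean(bh) for bh in breath_holds])
    sds = np.array([np.std(bh, ddof=1) for bh in breath_holds])
    systematic = np.std(means, ddof=1)       # spread of the mean positions
    random_err = np.sqrt(np.mean(sds ** 2))  # RMS of intra-BH spreads
    return systematic, random_err

# Two illustrative breath-holds with identical intra-BH spread
bhs = [[0.0, 0.2, -0.2], [1.0, 1.2, 0.8]]
sys_err, rand_err = systematic_and_random_error(bhs)
```

Whether sample (ddof=1) or population SD is used is a convention choice; the abstract does not specify it.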


Subject(s)
Breath Holding; Carcinoma, Non-Small-Cell Lung/radiotherapy; Lung Neoplasms/radiotherapy; Uncertainty; Female; Fluoroscopy; Humans; Male
5.
Eur J Vasc Endovasc Surg ; 52(3): 323-31, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27389943

ABSTRACT

OBJECTIVES: Fusion of three-dimensional (3D) computed tomography and intraoperative two-dimensional imaging in endovascular surgery relies on manual rigid co-registration of bony landmarks and tracking of hardware to provide a 3D overlay (hardware-based tracking, HWT). An alternative technique (image-based tracking, IMT) uses image recognition to register and place the fusion mask. We present preliminary experience with an agnostic fusion technology that uses IMT, with the aim of comparing the accuracy of overlay for this technology with HWT. METHOD: Data were collected prospectively for 12 patients. All devices were deployed using both IMT and HWT fusion assistance concurrently. Postoperative analysis of both systems was performed by three blinded expert observers, from selected time-points during the procedures, using the displacement of fusion rings, the overlay of vascular markings, and the true ostia of renal arteries. The mean overlay error and the deviation from mean error were derived using image analysis software. The mean overlay error was compared between IMT and HWT, and the validity of the point-picking technique was assessed. RESULTS: IMT was successful in all of the first 12 cases, whereas technical learning curve challenges thwarted HWT in four cases. When independent operators assessed the degree of accuracy of the overlay, the median error for IMT was 3.9 mm (IQR 2.89-6.24, max 9.5) versus 8.64 mm (IQR 6.1-16.8, max 24.5) for HWT (p = .001). Variance per observer was 0.69 mm² and the 95% limit of agreement ±1.63. CONCLUSION: In this preliminary study, the magnitude of displacement error from the "true anatomy" during image overlay was smaller for IMT than for HWT. This confirms that ongoing manual re-registration, as recommended by the manufacturer, should be performed for HWT systems to maintain accuracy. The error in position of the fusion markers for IMT was consistent, and thus may be considered predictable.
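The overlay-error metric above reduces to distances between expert-picked anatomical landmarks (e.g. true renal ostia) and the corresponding overlay markings. A minimal sketch of that point-picking comparison (the coordinates below are made-up 2D examples, not study data):

```python
import math

def mean_overlay_error(true_points, overlay_points):
    """Mean Euclidean distance (mm) between expert-picked landmark
    positions and the corresponding fusion-overlay markings."""
    dists = [math.dist(t, o) for t, o in zip(true_points, overlay_points)]
    return sum(dists) / len(dists)

# One landmark displaced by 4 mm, one perfectly overlaid
true_pts = [(0.0, 0.0), (3.0, 0.0)]
overlay_pts = [(0.0, 4.0), (3.0, 0.0)]
err = mean_overlay_error(true_pts, overlay_pts)
```

Per-observer variance and limits of agreement would then be computed over repeated picks by the three blinded observers.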


Subject(s)
Imaging, Three-Dimensional; Tomography, X-Ray Computed; Humans; Surgery, Computer-Assisted
6.
Int J Med Robot ; 11(1): 67-79, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24623371

ABSTRACT

BACKGROUND: Intraoperative application of tomographic imaging techniques provides a means of visual servoing for objects beneath the surface of organs. METHODS: The focus of this survey is on therapeutic and diagnostic medical applications where tomographic imaging is used in visual servoing. To this end, a comprehensive search of electronic databases was completed for the period 2000-2013. RESULTS: Existing techniques and products are categorized and studied, based on the imaging modality and their medical applications. This part complements Part I of the survey, which covers visual servoing techniques using endoscopic imaging and direct vision. CONCLUSION: The main challenges in using visual servoing based on tomographic images are identified. 'Supervised automation of medical robotics' is found to be a major trend in this field, and ultrasound is the most commonly used tomographic modality for visual servoing.


Subject(s)
Robotic Surgical Procedures/methods; Surgery, Computer-Assisted/methods; Tomography/methods; Algorithms; Fluoroscopy/methods; Humans; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Surveys and Questionnaires; Tomography, X-Ray Computed/methods; Ultrasonography/methods
7.
Trends Ecol Evol ; 29(7): 417-28, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24908439

ABSTRACT

The behavior of individuals determines the strength and outcome of ecological interactions, which drive population, community, and ecosystem organization. Bio-logging, such as telemetry and animal-borne imaging, provides essential individual viewpoints, tracks, and life histories, but requires capture of individuals and is often impractical to scale. Recent developments in automated image-based tracking offer opportunities to remotely quantify and understand individual behavior at scales and resolutions not previously possible, providing an essential supplement to other tracking methodologies in ecology. Automated image-based tracking should continue to advance the field of ecology by enabling better understanding of the linkages between individual and higher-level ecological processes, via high-throughput quantitative analysis of complex ecological patterns and processes across scales, including analysis of environmental drivers.


Subject(s)
Animal Distribution; Behavior, Animal; Ecology/trends; Telemetry; Animals
8.
Int J Med Robot ; 10(3): 263-74, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24106103

ABSTRACT

BACKGROUND: Intra-operative imaging is widely used to provide visual feedback to a clinician when he/she performs a procedure. In visual servoing, surgical instruments and parts of tissue/body are tracked by processing the acquired images. This information is then used within a control loop to manoeuvre a robotic manipulator during a procedure. METHODS: A comprehensive search of electronic databases was completed for the period 2000-2013 to provide a survey of the visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system. RESULTS: Detailed classification and comparative study of various contributions in visual servoing using endoscopic or direct visual images are presented and summarized in tables and diagrams. CONCLUSION: The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. 'Supervised automation of medical robotics' is found to be a major trend in this field.


Subject(s)
Endoscopes; Endoscopy/instrumentation; Endoscopy/methods; Robotics/methods; Automation; Cardiac Surgical Procedures; Computers; Diagnostic Imaging; Humans; Laparoscopy/methods; Minimally Invasive Surgical Procedures/methods; Natural Orifice Endoscopic Surgery/methods; Orthopedics; Software; Surgical Instruments