Results 1 - 20 of 32
1.
Radiat Oncol ; 19(1): 55, 2024 May 12.
Article in English | MEDLINE | ID: mdl-38735947

ABSTRACT

BACKGROUND: Currently, automatic esophagus segmentation remains a challenging task due to the organ's small size, low contrast, and large shape variation. We aimed to improve the performance of deep-learning esophagus segmentation by applying a strategy that first locates the object and then performs the segmentation task. METHODS: A total of 100 cases with thoracic computed tomography scans from two publicly available datasets were used in this study. A modified CenterNet, an object location network, was employed to locate the center of the esophagus for each slice. Subsequently, the 3D U-net and 2D U-net_coarse models were trained to segment the esophagus based on the predicted object center. A 2D U-net_fine model was trained based on the object center updated according to the 3D U-net model. The dice similarity coefficient and the 95% Hausdorff distance were used as quantitative evaluation metrics for the delineation performance. The characteristics of the esophageal contours automatically delineated by the 2D U-net and 3D U-net models were summarized, and the impact of object localization accuracy on the delineation performance was analyzed. Finally, the delineation performance in different segments of the esophagus was also summarized. RESULTS: The mean dice coefficients of the 3D U-net, 2D U-net_coarse, and 2D U-net_fine models were 0.77, 0.81, and 0.82, respectively; the 95% Hausdorff distances were 6.55, 3.57, and 3.76, respectively. Compared with the 2D U-net, the 3D U-net had a lower incidence of delineating wrong objects and a higher incidence of missing objects. After using the fine object center, the average dice coefficient improved by 5.5% in the cases with a dice coefficient less than 0.75, but by only 0.3% in the cases with a dice coefficient greater than 0.75.
The dice coefficients were lower for the esophagus between the orifice of the inferior and the pulmonary bifurcation compared with the other regions. CONCLUSION: The 3D U-net model tended to delineate fewer incorrect objects but also to miss more objects. A two-stage strategy with accurate object location can enhance the robustness of the segmentation model and significantly improve esophageal delineation performance, especially for cases with poor delineation results.
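The two evaluation metrics used above can be sketched in a few lines. This is a minimal NumPy illustration assuming binary masks and 2D contour point sets, not the authors' implementation:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def hd95(pred_pts, gt_pts):
    """95th-percentile symmetric Hausdorff distance between two
    point sets, given as (N, 2) arrays of contour coordinates."""
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    d_ab = d.min(axis=1)   # each predicted point to its nearest ground-truth point
    d_ba = d.min(axis=0)   # each ground-truth point to its nearest predicted point
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)
```

Unlike the maximum Hausdorff distance, the 95th-percentile variant discards the worst 5% of surface deviations, which makes it less sensitive to single stray contour points.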


Subject(s)
Deep Learning , Esophagus , Humans , Esophagus/diagnostic imaging , Tomography, X-Ray Computed/methods , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods
2.
Sensors (Basel) ; 24(7)2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38610495

ABSTRACT

In mobile robotics, LASER scanners have a wide spectrum of indoor and outdoor applications, both in structured and unstructured environments, due to their accuracy and precision. Most works that use this sensor have their own data representation and their own case-specific modeling strategies, and no common formalism is adopted. To address this issue, this manuscript presents an analytical approach for the identification and localization of objects using 2D LiDARs. Our main contribution lies in formally defining LASER sensor measurements and their representation, the identification of objects, their main properties, and their location in a scene. We validate our proposal with experiments in generic semi-structured environments common in autonomous navigation, and we demonstrate its feasibility in multiple object detection and identification, strictly following its analytical representation. Finally, our proposal further encourages and facilitates the design, modeling, and implementation of other applications that use LASER scanners as a distance sensor.
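A minimal instance of the kind of 2D LiDAR processing this work formalizes converts the range scan to Cartesian points and splits it into object segments at range discontinuities. This is a generic sketch; the function names and the 0.3 m gap threshold are illustrative, not taken from the paper:

```python
import numpy as np

def scan_to_points(ranges, angle_min, angle_inc):
    """Convert a 2D LiDAR range scan to Cartesian points in the sensor frame."""
    angles = angle_min + angle_inc * np.arange(len(ranges))
    r = np.asarray(ranges, dtype=float)
    return np.column_stack([r * np.cos(angles), r * np.sin(angles)])

def segment_objects(points, gap=0.3):
    """Split consecutive scan points into object segments wherever the
    Euclidean jump between neighbours exceeds `gap` (metres)."""
    if len(points) == 0:
        return []
    jumps = np.linalg.norm(np.diff(points, axis=0), axis=1)
    breaks = np.where(jumps > gap)[0] + 1
    return np.split(points, breaks)

def centroid(segment):
    """Simple object-location estimate: the segment centroid."""
    return segment.mean(axis=0)
```

Each segment is a candidate object; its centroid gives a first location estimate on which identification of shape properties can build.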

3.
Biomed Eng Online ; 23(1): 25, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38419078

ABSTRACT

BACKGROUND: The accurate detection of eyelid tumors is essential for effective treatment, but it can be challenging due to small and unevenly distributed lesions surrounded by irrelevant noise. Moreover, early symptoms of eyelid tumors are atypical, and some categories of eyelid tumors exhibit similar color and texture features, making it difficult to distinguish between benign and malignant eyelid tumors, particularly for ophthalmologists with limited clinical experience. METHODS: We propose a hybrid model, HM_ADET, for automatic detection of eyelid tumors, comprising YOLOv7_CNFG to locate eyelid tumors and a vision transformer (ViT) to classify them as benign or malignant. First, the ConvNeXt module with an inverted bottleneck layer in the backbone of YOLOv7_CNFG is employed to prevent information loss for small eyelid tumors. Then, the flexible rectified linear unit (FReLU) is applied to capture multi-scale features such as texture, edge, and shape, thereby improving the localization accuracy of eyelid tumors. In addition, considering the geometric center and area difference between the predicted box (PB) and the ground truth box (GT), the GIoU_loss is utilized to handle eyelid tumors with varying shapes and irregular boundaries. Finally, the multi-head attention (MHA) module is applied in ViT to extract discriminative features of eyelid tumors for benign and malignant classification. RESULTS: Experimental results demonstrate that the HM_ADET model achieves excellent performance in the detection of eyelid tumors. Specifically, YOLOv7_CNFG outperforms YOLOv7, with AP increasing from 0.763 to 0.893 on the internal test set and from 0.647 to 0.765 on the external test set. ViT achieves AUCs of 0.945 (95% CI 0.894-0.981) and 0.915 (95% CI 0.860-0.955) for the classification of benign and malignant tumors on the internal and external test sets, respectively.
CONCLUSIONS: Our study provides a promising strategy for the automatic diagnosis of eyelid tumors, which could potentially improve patient outcomes and reduce healthcare costs.
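The GIoU loss named in the methods accounts for both the overlap and the enclosing-box gap between predicted and ground-truth boxes, so it stays informative even when boxes do not overlap. A minimal axis-aligned sketch (illustrative only, not the HM_ADET code):

```python
def giou(box_a, box_b):
    """Generalized IoU of two axis-aligned boxes given as (x1, y1, x2, y2).
    GIoU = IoU - |C minus (A union B)| / |C|, where C is the smallest
    box enclosing both A and B."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    # Smallest enclosing box C
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return inter / union - (c_area - union) / c_area

def giou_loss(box_a, box_b):
    """Loss form used for training: 1 - GIoU, ranging over [0, 2]."""
    return 1.0 - giou(box_a, box_b)
```

For two identical boxes GIoU is 1 (loss 0); for disjoint boxes it goes negative, penalizing predictions that drift far from the target.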


Subject(s)
Eyelid Neoplasms , Humans , Eyelid Neoplasms/diagnosis , Area Under Curve , Health Care Costs
4.
MethodsX ; 11: 102354, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37719921

ABSTRACT

Tracking multiple objects is an important problem in video surveillance. In this research, we provide a tracking system based on BEBLID (Boosted Efficient Binary Local Image Descriptor) features. BEBLID is a fast binary descriptor that works similarly to ORB, SIFT, or SURF and requires little processing. BEBLID key points and their associated descriptors are first generated for the objects in two neighboring frames. The best match is then found by computing the Hamming distance between these two point sets, and the subsequent localization of the objects is deduced from the matching key points. At the same time, object detection is facilitated by YOLOv3. The combined outputs of BEBLID and YOLOv3 are utilized for precise localization of the multiple objects, and identifying object locations over time yields the tracking mechanism. The effectiveness of our tracking technique is evaluated using datasets representing actual surveillance scenarios. The experimental outcomes demonstrate that the suggested approach is capable of accurately and successfully tracking objects.
•We propose a multiple object tracking algorithm based on the Boosted Efficient Binary Local Image Descriptor (BEBLID) feature.
•The algorithm utilizes the competencies of BEBLID and YOLOv3 to effectively detect and track multiple objects.
•We validated the algorithm by comparing its results with other state-of-the-art algorithms presented in the literature.
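The Hamming-distance matching step between two sets of binary descriptors can be sketched as follows. This is a generic NumPy illustration for BEBLID/ORB-style byte descriptors; `hamming_match` is a hypothetical helper, not part of the authors' system:

```python
import numpy as np

def hamming_match(desc_a, desc_b):
    """Nearest-neighbour matching of binary descriptors (rows of uint8
    bytes, as produced by BEBLID/ORB-style extractors) under Hamming
    distance. Returns (index_in_a, index_in_b, distance) triples."""
    # Pairwise Hamming distance: XOR the byte matrices, then popcount.
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]
    dist = np.unpackbits(xor, axis=-1).sum(axis=-1)
    nearest = dist.argmin(axis=1)
    return [(i, int(j), int(dist[i, j])) for i, j in enumerate(nearest)]
```

Because the distance is a bitwise XOR plus a population count, matching binary descriptors is far cheaper than the floating-point L2 matching required by SIFT or SURF, which is what makes this approach suitable for real-time tracking.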

5.
Sensors (Basel) ; 23(8)2023 Apr 10.
Article in English | MEDLINE | ID: mdl-37112210

ABSTRACT

Object localization is a sub-field of computer vision-based object recognition technology that identifies object classes and locations. Studies on safety management are still in their infancy, particularly those aimed at lowering occupational fatalities and accidents at indoor construction sites. In comparison to manual procedures, this study suggests an improved discriminative object localization (IDOL) algorithm to aid safety managers with visualization to improve indoor construction site safety management. The IDOL algorithm employs Grad-CAM visualization images from the EfficientNet-B7 classification network to automatically identify internal characteristics pertinent to the set of classes evaluated by the network model without the need for further annotation. To evaluate the performance of the presented algorithm in the study, localization accuracy in 2D coordinates and localization error in 3D coordinates of the IDOL algorithm and YOLOv5 object detection model, a leading object detection method in the current research area, are compared. The comparison findings demonstrate that the IDOL algorithm provides a higher localization accuracy with more precise coordinates than the YOLOv5 model over both 2D images and 3D point cloud coordinates. The results of the study indicate that the IDOL algorithm achieved improved localization performance over the existing YOLOv5 object detection model and, thus, is able to assist with visualization of indoor construction sites in order to enhance safety management.
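The core idea of deriving a localization from a class-activation map without extra annotation can be sketched by thresholding the heatmap into a bounding box. This is a simplified stand-in for the Grad-CAM-based step; the 0.5 threshold is illustrative:

```python
import numpy as np

def cam_to_bbox(heatmap, thresh=0.5):
    """Turn a class-activation heatmap (e.g. Grad-CAM output) into a coarse
    bounding box by keeping pixels above `thresh` of the normalized map.
    Returns (x_min, y_min, x_max, y_max), or None if nothing activates."""
    rng = heatmap.max() - heatmap.min()
    h = (heatmap - heatmap.min()) / (rng + 1e-12)   # normalize to [0, 1]
    ys, xs = np.where(h >= thresh)
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

This is weakly supervised localization: the box comes for free from a classifier's attention, at the cost of coarser boundaries than a dedicated detector would produce.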

6.
Sensors (Basel) ; 23(6)2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36991909

ABSTRACT

Three-dimensional (3D) real-time object detection and tracking is an important task for autonomous vehicles and road and railway smart mobility, allowing them to analyze their environment for navigation and obstacle avoidance. In this paper, we improve the efficiency of 3D monocular object detection by using dataset combination and knowledge distillation, and by creating a lightweight model. Firstly, we combine real and synthetic datasets to increase the diversity and richness of the training data. Then, we use knowledge distillation to transfer knowledge from a large, pre-trained model to a smaller, lightweight model. Finally, we create a lightweight model by selecting combinations of width, depth, and resolution to reach a target complexity and computation time. Our experiments showed that each method improves either the accuracy or the efficiency of our model with no significant drawbacks. Using all of these approaches is especially useful in resource-constrained environments, such as self-driving cars and railway systems.
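Knowledge distillation as described above typically blends a hard-label loss with a temperature-softened divergence to the teacher's outputs. A minimal sketch of the standard Hinton-style formulation, which is assumed here rather than taken from the paper:

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with the KL divergence between
    temperature-softened teacher and student distributions. The KL term
    is scaled by T^2 to keep gradient magnitudes comparable."""
    p_s = softmax(student_logits)
    ce = -np.log(p_s[label] + 1e-12)                       # hard-label term
    q_t = softmax(teacher_logits, T)
    q_s = softmax(student_logits, T)
    kl = np.sum(q_t * (np.log(q_t + 1e-12) - np.log(q_s + 1e-12)))
    return alpha * ce + (1 - alpha) * (T ** 2) * kl
```

The temperature spreads probability mass over non-target classes, so the student also learns the teacher's inter-class similarity structure, not just the argmax.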

7.
J Imaging ; 9(3)2023 Mar 20.
Article in English | MEDLINE | ID: mdl-36976123

ABSTRACT

In this work, a visual object detection and localization workflow integrated into a robotic platform is presented for the 6D pose estimation of objects with challenging characteristics in terms of weak texture, surface properties, and symmetries. The workflow is used as part of a module for object pose estimation deployed to a mobile robotic platform that exploits the Robot Operating System (ROS) as middleware. The objects of interest aim to support robot grasping in the context of human-robot collaboration during car door assembly in industrial manufacturing environments. In addition to the special object properties, these environments are inherently characterised by cluttered backgrounds and unfavorable illumination conditions. For this specific application, two different datasets were collected and annotated for training a learning-based method that extracts the object pose from a single frame. The first dataset was acquired in controlled laboratory conditions and the second in the actual indoor industrial environment. Different models were trained on the individual datasets and on a combination of them, and were further evaluated on a number of test sequences from the actual industrial environment. The qualitative and quantitative results demonstrate the potential of the presented method in relevant industrial applications.

8.
Front Med (Lausanne) ; 10: 1102510, 2023.
Article in English | MEDLINE | ID: mdl-36926317

ABSTRACT

Introduction: Visual processing deficits in Alzheimer's disease are associated with diminished functional independence. While environmental adaptations have been proposed to promote independence, recent guidance gives limited consideration to such deficits and offers conflicting recommendations for people with dementia. We evaluated the effects of clutter and color contrast on performance of everyday actions in posterior cortical atrophy and memory-led typical Alzheimer's disease. Methods: As part of two pilot repeated-measures investigations, 15 patients with posterior cortical atrophy, 11 with typical Alzheimer's disease, and 16 healthy controls were asked to pick up a visible target object from a standing or seated position. Participants picked up the target within a controlled real-world setting under varying environmental conditions: with/without clutter, with/without a color contrast cue, and far/near target position. Task completion time was recorded using a target-mounted inertial measurement unit. Results: Across both experiments, difficulties locating a target object were apparent: patient groups took an estimated 50-90% longer to pick up targets relative to controls. There was no evidence of an effect of color contrast when locating objects from standing or seated positions, nor of any other environmental condition from a standing position, on completion time in any participant group. Locating objects surrounded by five distractors rather than none, from a seated position, was associated with a disproportionately greater effect on completion times in the posterior cortical atrophy group relative to the control or typical Alzheimer's disease groups. Smaller ratios of relative effects, not statistically significant but directionally consistent, were seen for two distractors compared with none.
Discussion: Findings are consistent with inefficient object localization in posterior cortical atrophy relative to typical Alzheimer's disease and control groups, particularly with targets presented within reaching distance among visual clutter. Findings may carry implications for considering the adverse effects of visual clutter when developing and implementing environmental modifications to promote functional independence in Alzheimer's disease.

9.
Elife ; 12, 2023 02 15.
Article in English | MEDLINE | ID: mdl-36790170

ABSTRACT

The rodent visual system has attracted great interest in recent years due to its experimental tractability, but the fundamental mechanisms used by the mouse to represent the visual world remain unclear. In the primate, researchers have argued from both behavioral and neural evidence that a key step in visual representation is 'figure-ground segmentation', the delineation of figures as distinct from backgrounds. To determine if mice also show behavioral and neural signatures of figure-ground segmentation, we trained mice on a figure-ground segmentation task where figures were defined by gratings and naturalistic textures moving counterphase to the background. Unlike primates, mice were severely limited in their ability to segment figure from ground using the opponent motion cue, with segmentation behavior strongly dependent on the specific carrier pattern. Remarkably, when mice were forced to localize naturalistic patterns defined by opponent motion, they adopted a strategy of brute force memorization of texture patterns. In contrast, primates, including humans, macaques, and mouse lemurs, could readily segment figures independent of carrier pattern using the opponent motion cue. Consistent with mouse behavior, neural responses to the same stimuli recorded in mouse visual areas V1, RL, and LM also did not support texture-invariant segmentation of figures using opponent motion. Modeling revealed that the texture dependence of both the mouse's behavior and neural responses could be explained by a feedforward neural network lacking explicit segmentation capabilities. These findings reveal a fundamental limitation in the ability of mice to segment visual objects compared to primates.


Subject(s)
Visual Cortex , Animals , Humans , Visual Cortex/diagnostic imaging , Visual Cortex/physiology , Primates , Macaca , Pattern Recognition, Visual/physiology , Photic Stimulation
10.
Sensors (Basel) ; 23(2)2023 Jan 09.
Article in English | MEDLINE | ID: mdl-36679545

ABSTRACT

Object detection and tracking is one of the key applications of wireless sensor networks (WSNs). The key issues associated with this application include network lifetime, object detection and localization accuracy. To ensure the high quality of the service, there should be a trade-off between energy efficiency and detection accuracy, which is challenging in a resource-constrained WSN. Most researchers have enhanced the application lifetime while achieving target detection accuracy at the cost of high node density. They neither considered the system cost nor the object localization accuracy. Some researchers focused on object detection accuracy while achieving energy efficiency by limiting the detection to a predefined target trajectory. In particular, some researchers only focused on node clustering and node scheduling for energy efficiency. In this study, we proposed a mobile object detection and tracking framework named the Energy Efficient Object Detection and Tracking Framework (EEODTF) for heterogeneous WSNs, which minimizes energy consumption during tracking while not affecting the object detection and localization accuracy. It focuses on achieving energy efficiency via node optimization, mobile node trajectory optimization, node clustering, data reporting optimization and detection optimization. We compared the performance of the EEODTF with the Energy Efficient Tracking and Localization of Object (EETLO) model and the Particle-Swarm-Optimization-based Energy Efficient Target Tracking Model (PSOEETTM). It was found that the EEODTF is more energy efficient than the EETLO and PSOEETTM models.


Subject(s)
Algorithms , Wireless Technology , Physical Phenomena , Cluster Analysis , Research Design
11.
Biomed Eng Online ; 21(1): 91, 2022 Dec 24.
Article in English | MEDLINE | ID: mdl-36566183

ABSTRACT

Blindness is a major impairment that affects the daily life activities of any human. Visual prostheses have been introduced to provide artificial vision to the blind, with the aim of restoring their confidence and independence. In this article, we propose an approach that involves four image enhancement techniques to facilitate object recognition and localization for visual prostheses users. These techniques are clip-art representation of objects, edge sharpening, corner enhancement, and electrode dropout handling. The proposed techniques are tested in a real-time mixed reality simulation environment that mimics the vision perceived by visual prostheses users. Twelve experiments were conducted to measure the performance of the participants in object recognition and localization. The experiments involved single objects, multiple objects, and navigation. To evaluate the participants' performance in object recognition, we measured their recognition time, recognition accuracy, and confidence level. For object localization, two metrics were used: grasping attempt time and grasping accuracy. The results demonstrate that using all enhancement techniques simultaneously gives higher accuracy, a higher confidence level, and less time for recognizing and grasping objects in comparison to not applying the enhancement techniques or applying pair-wise combinations of them. Visual prostheses could benefit from the proposed approach to provide users with enhanced perception.


Subject(s)
Augmented Reality , Visual Prosthesis , Humans , Visual Perception , Vision, Ocular , Recognition, Psychology
12.
Comput Biol Med ; 150: 106067, 2022 11.
Article in English | MEDLINE | ID: mdl-36150251

ABSTRACT

BACKGROUND AND OBJECTIVE: Detection of the Optic Disc (OD) in retinal fundus images is crucial for identifying diverse abnormal conditions in the retina, such as diabetic retinopathy. Previous systems are oriented to OD detection and segmentation, but most fail to locate the OD when the image does not have a typical appearance. The objective of the proposed work is to precisely define a new and robust OD segmentation in color retinal fundus images. METHODS: The proposed algorithm is composed of two stages: OD localization and segmentation. The first phase performs OD localization through: 1) a preprocessing step; 2) vessel extraction and elimination; and 3) a geometric analysis to decide the OD location. In the second phase, a set of candidates is computed, and a combination of these candidates accurately forms a complete contour of the OD. RESULTS: The proposed method is evaluated using 10 publicly available databases as well as a local database. Accuracy rates in the RimOne and IDRID databases are 98.06% and 99.71%, respectively, and 100% for the Chase, Drive, HRF, Drishti, Drions, Bin Rushed, Magrabia, Messidor and LocalDB databases, with an overall success rate of 99.80% and specificity rates of 99.44%, 99.64%, 99.66%, 99.66%, 99.70%, 99.87%, 99.72%, 99.83% and 99.82% for the Rim One, Drions, IDRID, Drishti, HRF, Bin Rushed, Magrabia, Messidor and proprietary databases. CONCLUSION: The main advantage of the proposed approach is its robustness and excellent performance even in critical cases of retinal images. The proposed method achieves state-of-the-art performance with regard to OD detection and segmentation. It is also of great interest for clinical use, as it does not require expert intervention for each image.


Subject(s)
Diabetic Retinopathy , Optic Disk , Humans , Optic Disk/diagnostic imaging , Fundus Oculi , Retina/diagnostic imaging , Algorithms , Diabetic Retinopathy/diagnostic imaging
13.
J Imaging ; 8(7)2022 Jul 08.
Article in English | MEDLINE | ID: mdl-35877632

ABSTRACT

Two-dimensional (2D) object detection has been an intensely discussed and researched field of computer vision. Despite numerous advancements made in the field over the years, we still need a robust approach to efficiently conduct classification and localization of objects in our environment using just our mobile devices. Moreover, 2D object detection limits the overall understanding of the detected object and does not provide any additional information in terms of its size and position in the real world. This work proposes a novel object localization solution in three dimensions (3D) for mobile devices. The proposed method works by combining a 2D object detection Convolutional Neural Network (CNN) model with Augmented Reality (AR) technologies to recognize objects in the environment and determine their real-world coordinates. We leverage the built-in Simultaneous Localization and Mapping (SLAM) capability of Google's ARCore to detect planes and obtain the camera information needed to generate cuboid proposals from an object's 2D bounding box. The proposed method is fast and efficient for identifying everyday objects in real-world space and, unlike mobile offloading techniques, is well designed to work with the limited resources of a mobile device.
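One common way to combine a 2D bounding box with an AR-detected ground plane into a 3D position, in the spirit of the method above, is to intersect the viewing ray through the box's bottom-center with that plane. This is a simplified sketch with an assumed camera geometry (camera looking along +z, y pointing down, horizontal plane below the camera), not the authors' cuboid-proposal code:

```python
import numpy as np

def bbox_foot_to_world(bbox, K, cam_height=1.4):
    """Back-project the bottom-centre of a 2D bounding box (x1, y1, x2, y2)
    onto a horizontal ground plane located `cam_height` metres below the
    camera, returning a rough 3D object position in the camera frame."""
    x1, y1, x2, y2 = bbox
    u, v = (x1 + x2) / 2.0, float(y2)               # foot point in pixels
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing-ray direction
    if ray[1] <= 0:
        return None                                 # ray never hits the ground
    t = cam_height / ray[1]                         # scale so the ray reaches the plane
    return ray * t                                  # 3D point (x, y, z)
```

In an actual ARCore pipeline the plane equation and camera intrinsics come from the SLAM session rather than being fixed constants, but the ray-plane intersection is the same.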

14.
J Comput Biol ; 29(8): 932-941, 2022 08.
Article in English | MEDLINE | ID: mdl-35862434

ABSTRACT

The revolutionary technique cryoelectron tomography (cryo-ET) enables imaging of cellular structure and organization in a near-native environment at submolecular resolution, which is vital to subsequent data analysis and modeling. The conventional structure detection process first reconstructs the three-dimensional (3D) tomogram from a series of two-dimensional (2D) projections and then directly detects subcellular components found within the tomogram. However, this process is challenging due to potential structural information loss during the tomographic reconstruction and the limited scope of existing methods since most major state-of-the-art object detection methods are designed for 2D rather than 3D images. Therefore, in this article, as an alternative approach to complement the conventional process, we propose a novel 2D-to-3D framework that detects structures within 2D projection images before reconstructing the results back to 3D. We implemented the proposed framework as three specific algorithms for three individual tasks: semantic segmentation, edge detection, and object localization. As experimental validation of the 2D-to-3D framework for cryo-ET data, we applied the algorithms to the segmentation of mitochondrial calcium phosphate granules, detection of spherical edges, and localization of mitochondria. Quantitative and qualitative results show better performance for prediction tasks of segmentation on the 2D projections and promising performance on object localization and edge detection, paving the way for future studies in the exploration of cryo-ET for in situ structural biology.


Subject(s)
Electron Microscope Tomography , Image Processing, Computer-Assisted , Algorithms , Cryoelectron Microscopy/methods , Electron Microscope Tomography/methods , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods
15.
Int J Mol Sci ; 23(15)2022 Jul 26.
Article in English | MEDLINE | ID: mdl-35897785

ABSTRACT

Alzheimer's disease (AD) is a multifactorial pathology characterized by ß-amyloid (Aß) deposits, Tau hyperphosphorylation, neuroinflammatory response, and cognitive deficit. Changes in the bacterial gut microbiota (BGM) have been reported as a possible etiological factor of AD. We assessed, in 3xTg offspring (F1), the effect of BGM dysbiosis in mothers (F0) during gestation and in F1 from lactation up to the age of 5 months on Aß and Tau levels in the hippocampus, as well as on spatial memory at the early symptomatic stage of AD. We found that BGM dysbiosis induced with antibiotic (Abx) treatment in F0 was vertically transferred to their F1 3xTg mice, as observed on postnatal days (PD) 30 and 150. On PD150, we observed a delay in spatial memory impairment and Aß deposits, but not in Tau and pTau protein in the hippocampus, at the early symptomatic stage of AD. These effects are correlated with the relative abundance of bacteria and alpha diversity, and are specific to bacterial consortia. Our results suggest that this specific BGM could reduce neuroinflammatory responses related to cerebral amyloidosis and cognitive deficit, and activate metabolic pathways associated with the biosynthesis of molecules that trigger or protect against AD.


Subject(s)
Alzheimer Disease , Gastrointestinal Microbiome , Alzheimer Disease/metabolism , Amyloid beta-Peptides/metabolism , Animals , Anti-Bacterial Agents/pharmacology , Anti-Bacterial Agents/therapeutic use , Disease Models, Animal , Dysbiosis/complications , Dysbiosis/drug therapy , Female , Inflammation/complications , Memory Disorders/complications , Memory Disorders/etiology , Mice , Mice, Transgenic , tau Proteins/metabolism
16.
Sensors (Basel) ; 22(11)2022 Jun 02.
Article in English | MEDLINE | ID: mdl-35684861

ABSTRACT

The development of a piezoresistive coating produced by dispersing graphene nanoplatelets (GNPs) in a commercial water-based polyurethane paint is presented. The feasibility of exploiting it to realize highly sensitive discrete strain sensors and to measure spatial strain distribution using linear and two-dimensional depositions was investigated. Firstly, the production process was optimized to achieve the best electromechanical response. The obtained materials were then subjected to different characterizations for structural and functional investigations. Morphological analyses showed a homogeneous dispersion of GNPs within the host matrix and an average thickness of about 75 µm for the obtained nanostructured films. Several adhesion tests demonstrated that the presence of the nanostructures inside the paint film lowered the adhesion strength by only 20% with respect to the neat paint. Through electrical tests, the percolation curve of the nanomaterial was acquired, showing an effective electrical conductivity ranging from about 10^-4 S/m to 3.5 S/m depending on the amount of filler dispersed in the neat paint: in particular, samples with weight fractions of 2, 2.5, 3, 3.5, 4, 5 and 6 wt% of GNPs were produced and characterized. Next, the sensitivity to flexural strain of small piezoresistive sensors deposited by a spray-coating technique on a fiberglass-reinforced epoxy laminate beam was measured: a high gauge factor of 33 was obtained at a maximum strain of 1%. The sensitivity curve of the piezoresistive material was subsequently adopted to predict the strain along a multicontact painted strip on the same beam. Finally, for a painted laminate plate subjected to a mechanical flexural load, we demonstrated, through an electrical resistance tomography technique, the feasibility of mapping the electrical conductivity variations, which are strictly related to the induced strain/stress field.
As a further example, we also showed the possibility of using the coating to detect the presence of conducting objects and damage.
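The gauge factor reported above is the standard figure of merit for a piezoresistive sensor: the relative resistance change divided by the applied strain. As a worked check of the reported value, GF = 33 at 1% strain implies a resistance change of about 33%:

```python
def gauge_factor(r0, r_strained, strain):
    """Gauge factor of a piezoresistive sensor: the relative resistance
    change (R - R0)/R0 divided by the applied mechanical strain."""
    return (r_strained - r0) / r0 / strain

# A nominal 100-ohm strip rising to 133 ohms at 1% strain gives GF = 33,
# consistent with the value reported in the abstract.
```

The resistance values here are illustrative; the abstract reports only the gauge factor and the strain.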

17.
Phys Biol ; 19(4)2022 06 23.
Article in English | MEDLINE | ID: mdl-35654026

ABSTRACT

Weakly electric fish encode perturbations in a self-generated electric field to sense their environment. Localizing objects using this electric sense requires that distance be decoded from a two-dimensional 'electric image' of the field perturbations on their skin. Many studies of object localization by weakly electric fish, and by electric sensing in a generic context, have focused on extracting location information from different features of the electric image. Some of these studies have also considered the additional information gained from sampling the electric image at different times, and from different viewpoints. Here, we take a different perspective and instead consider the information available at a single point in space (i.e. a single sensor or receptor) at a single point in time (i.e. constant field). By combining the information from multiple receptors, we show that an object's distance can be unambiguously encoded by as few as four receptors at specific locations on a sensing surface, in a manner that is relatively robust to environmental noise. This provides a lower bound on the information (i.e. receptor array size) required to decode the three-dimensional location of an object using an electric sense.


Subject(s)
Electric Fish , Animals
18.
Sensors (Basel) ; 23(1)2022 Dec 29.
Article in English | MEDLINE | ID: mdl-36616974

ABSTRACT

Vision is the main component of current robotics systems that is used for manipulating objects. However, solely relying on vision for hand-object pose tracking faces challenges such as occlusions and objects moving out of view during robotic manipulation. In this work, we show that object kinematics can be inferred from local haptic feedback at the robot-object contact points, combined with robot kinematics information given an initial vision estimate of the object pose. A planar, dual-arm, teleoperated robotic setup was built to manipulate an object with hands shaped like circular discs. The robot hands were built with rubber cladding to allow for rolling contact without slipping. During stable grasping by the dual arm robot, under quasi-static conditions, the surface of the robot hand and object at the contact interface is defined by local geometric constraints. This allows one to define a relation between object orientation and robot hand orientation. With rolling contact, the displacement of the contact point on the object surface and the hand surface must be equal and opposite. This information, coupled with robot kinematics, allows one to compute the displacement of the object from its initial location. The mathematical formulation of the geometric constraints between robot hand and object is detailed. This is followed by the methodology in acquiring data from experiments to compute object kinematics. The sensors used in the experiments, along with calibration procedures, are presented before computing the object kinematics from recorded haptic feedback. Results comparing object kinematics obtained purely from vision and from haptics are presented to validate our method, along with the future ideas for perception via haptic manipulation.


Subject(s)
Haptic Technology , Robotics , Hand , Upper Extremity , Feedback
19.
Complex Intell Systems ; 8(3): 1929-1939, 2022.
Article in English | MEDLINE | ID: mdl-34777962

ABSTRACT

Bone age assessment using hand-wrist X-ray images is fundamental when diagnosing growth disorders in a child or providing more patient-specific treatment. However, because the clinical procedure is a subjective assessment, its accuracy depends highly on the doctor's experience. Motivated by this, a deep learning-based computer-aided diagnosis method was proposed for bone age assessment. Inspired by clinical approaches, and aiming to avoid expensive manual annotations, localization of informative regions was first performed with a fully unsupervised learning method, and an image-processing pipeline was proposed. Subsequently, an image model with pre-trained weights was used as a backbone to enhance the reliability of prediction. The prediction head was implemented as a multilayer perceptron with one hidden layer. In line with clinical studies, gender information was provided as an additional input to the prediction head by embedding it into the feature vector computed by the backbone model. In an experimental comparison study, the best results showed a mean absolute error of 6.2 months on the public RSNA dataset and 5.1 months on the additional dataset, using MobileNetV3 as the backbone.
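A rough sketch of the described prediction head, with randomly initialized weights in place of trained ones (the dimensions, including the 576-feature backbone output and 16-dimensional gender embedding, are assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def prediction_head(features, gender, w1, b1, w2, b2, gender_embed):
    """Sketch of the described head: a learned gender embedding is
    concatenated to the backbone feature vector, then passed through an
    MLP with one hidden layer to regress bone age (in months)."""
    x = np.concatenate([features, gender_embed[gender]])
    h = np.maximum(0.0, x @ w1 + b1)   # ReLU hidden layer
    return float(h @ w2 + b2)          # scalar age prediction

feat_dim, embed_dim, hidden = 576, 16, 128      # assumed dimensions
gender_embed = rng.normal(size=(2, embed_dim))  # rows: 0 = female, 1 = male
w1 = rng.normal(scale=0.05, size=(feat_dim + embed_dim, hidden))
b1 = np.zeros(hidden)
w2 = rng.normal(scale=0.05, size=hidden)
b2 = 0.0

features = rng.normal(size=feat_dim)            # stand-in for backbone output
age = prediction_head(features, gender=1, w1=w1, b1=b1, w2=w2, b2=b2,
                      gender_embed=gender_embed)
assert np.isfinite(age)
```

Embedding gender rather than appending a raw binary flag lets the network learn how strongly sex should modulate the regression, which matches the clinically observed sex differences in skeletal maturation.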

20.
Bioinspir Biomim ; 17(1), 2021 Dec 22.
Article in English | MEDLINE | ID: mdl-34673547

ABSTRACT

Parallax, as a visual effect, is used for depth perception of objects. But does a parallax effect also exist in the context of electric field imagery? In this work, the example of weakly electric fish is used to investigate how the self-generated electric field that these fish use for orientation and communication alike may serve as a template for defining electric parallax. The skin of the electric fish carries a vast number of electroreceptors that detect the self-emitted, dipole-like electric field. Here, the weakly electric fish is abstracted as an electric dipole with a sensor line between the two emitters. Starting from an analytical description of the object-induced distortion in a uniform electric field, the distortion in a dipole-like field is simplified and simulated. On the basis of this simulation, the parallax effect could be demonstrated in electric field images, i.e. by closer inspection of the voltage profiles on the sensor line. Electric parallax can therefore be defined as the relative movement of a signal feature of the voltage profile (here, its maximum, or peak) along the sensor line, forming a peak trace (PT). The PT width correlates with the object's vertical distance to the sensor line: close objects create a large PT and distant objects a small PT, comparable to the effect of visual motion parallax.
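A rough simulation in the same spirit, with a toy two-charge emitter, an induced-dipole object, and illustrative geometry (all parameter values are assumptions, not taken from the paper):

```python
import numpy as np

# Toy dipole emitter: charges +q and -q on the body axis, unit q,
# with the sensor line lying on the axis between them
POS, NEG = np.array([-1.0, 0.0]), np.array([1.0, 0.0])

def e_field(p):
    """Field of the two point charges at position p."""
    d1, d2 = p - POS, p - NEG
    return d1 / np.linalg.norm(d1) ** 3 - d2 / np.linalg.norm(d2) ** 3

def peak_position(x0, z, xs=np.linspace(-0.9, 0.9, 2001)):
    """Voltage profile on the sensor line from a small conductive object
    at (x0, z): the object acts as an induced dipole p ~ E(object); we
    return the location of the profile's positive peak."""
    obj = np.array([x0, z])
    p = e_field(obj)                     # induced dipole moment (up to a**3)
    d = np.stack([xs, np.zeros_like(xs)], axis=1) - obj
    v = d @ p / np.linalg.norm(d, axis=1) ** 3
    return xs[np.argmax(v)]

def pt_width(z, sweep=0.3):
    """Peak trace (PT) width: how far the peak travels while the object
    moves laterally from -sweep to +sweep at height z."""
    return abs(peak_position(sweep, z) - peak_position(-sweep, z))

near_pt, far_pt = pt_width(z=0.2), pt_width(z=0.5)
assert near_pt > far_pt   # closer objects produce a larger peak trace
```

Tracking where the peak of the voltage profile sits as the object sweeps past makes the PT concrete: in this toy geometry the near object's peak travels farther along the sensor line than the far object's, mirroring the distance cue described in the abstract.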


Subject(s)
Electric Fish , Motion Perception , Animals , Computer Simulation , Electric Organ , Electricity , Motion (Physics) , Movement