Results 1 - 20 of 39
1.
J Imaging; 10(7), 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39057740

ABSTRACT

Knowledge of spectral sensitivity is important for high-precision comparison of images taken by different cameras, and for recognition of objects and interpretation of scenes in which color is an important cue. Direct estimation of quantum efficiency curves (QECs) is a complicated and tedious process requiring specialized equipment, and many camera manufacturers do not make spectral characteristics publicly available. This has led to the development of indirect techniques, which are unreliable because they are highly sensitive to noise in the input data and often require additional ad hoc conditions that do not always hold. We demonstrate why the determination of QECs is unstable and propose an approach that guarantees stable QEC reconstruction even in the presence of noise. A device realizing this approach is also proposed. The reported results served as the basis for a granted US patent.
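The instability described here is the classic ill-conditioning of inverting the camera response model. A minimal sketch of the idea, assuming a discretized linear model r = A q (A built from known illuminant/reflectance spectra, q the sampled QEC) and using generic Tikhonov smoothing rather than the authors' patented scheme:

```python
import numpy as np

# Camera response model: r_i = sum_j E_i(lambda_j) * R_i(lambda_j) * q(lambda_j).
# Stacking measurements gives r = A q, where A encodes the known spectra
# and q is the unknown QEC sampled at n wavelengths.

def estimate_qec(A, r, alpha=1e-3):
    """Tikhonov-regularized least squares:
    argmin ||A q - r||^2 + alpha * ||D q||^2,
    where D is a second-difference operator enforcing a smooth QEC."""
    n = A.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)   # (n-2, n) smoothness prior
    lhs = A.T @ A + alpha * (D.T @ D)
    return np.linalg.solve(lhs, A.T @ r)
```

Because A is typically ill-conditioned, the unregularized solution np.linalg.lstsq(A, r) amplifies measurement noise, which is the lack of stability the abstract refers to; alpha trades data fidelity against smoothness.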

2.
Sensors (Basel); 23(19), 2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37836857

ABSTRACT

This study is the first to develop technology for evaluating the object recognition performance of camera sensors, which are increasingly important in autonomous vehicles owing to their relatively low price, and for verifying the efficiency of camera recognition algorithms under lens-obstruction conditions. To this end, the concentration and color of the blockage and the type and color of the object were set as the major factors, and their effects on camera recognition performance were analyzed using a camera simulator based on a virtual test drive toolkit. The results show that blockage concentration has the largest impact on object recognition, followed in order by object type, blockage color, and object color. Among blockage colors, black exhibited better recognition performance than gray and yellow. In addition, changes in blockage color affected the recognition of object types, producing a different response for each object. Based on these findings, we propose a simulation-based method for evaluating camera recognition performance under blockage, and we establish an algorithm evaluation environment for various manufacturers through an interface with an actual camera. By indicating when future camera lens cleaning becomes necessary, we provide manufacturers with technical measures to improve cleaning timing and camera safety.

3.
Sensors (Basel); 23(6), 2023 Mar 21.
Article in English | MEDLINE | ID: mdl-36992006

ABSTRACT

High-precision maps are widely applied in intelligent-driving vehicles for localization and planning tasks. Vision sensors, especially monocular cameras, have become favoured in mapping approaches due to their high flexibility and low cost. However, monocular visual mapping suffers severe performance degradation in adverse illumination environments such as low-light roads or underground spaces. To address this issue, we first introduce an unsupervised learning approach to improve keypoint detection and description on monocular camera images. By emphasizing the consistency between feature points in the learning loss, visual features in dim environments can be better extracted. Second, to suppress the scale drift in monocular visual mapping, a robust loop-closure detection scheme is presented, which integrates both feature-point verification and multi-grained image similarity measurements. In experiments on public benchmarks, our keypoint detection approach proves robust against varied illumination. In scenario tests covering both underground and on-road driving, we demonstrate that our approach reduces scale drift in scene reconstruction and achieves a mapping accuracy gain of up to 0.14 m in textureless or low-illumination environments.

4.
Sensors (Basel); 23(5), 2023 Feb 23.
Article in English | MEDLINE | ID: mdl-36904689

ABSTRACT

We developed a mobile application for cervical rehabilitation that uses a non-invasive camera-based head-tracker sensor to monitor neck movements. The intended users should be able to run the application on their own mobile devices, but devices differ in camera sensors and screen dimensions, which could affect user performance and neck movement monitoring. In this work, we studied the influence of mobile device type on camera-based monitoring of neck movements for rehabilitation purposes. We conducted an experiment to test whether the characteristics of a mobile device affect neck movements when using the application with the head-tracker. The experiment consisted of using our application, which contains an exergame, on three mobile devices. We used wireless inertial sensors to measure the neck movements performed in real time while using each device. The results showed that the effect of device type on neck movements was not statistically significant. We included sex as a factor in the analysis, but there was no statistically significant interaction between the sex and device variables. Our mobile application thus proved to be device-agnostic, which will allow intended users to use the mHealth application regardless of device type. Future work can therefore continue with the clinical evaluation of the application to test the hypothesis that the exergame improves therapeutic adherence in cervical rehabilitation.
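As a sketch of how the reported device effect and device-by-sex interaction could be tested, assuming a long-format table with hypothetical columns rom (neck range of motion from the inertial sensors), device, and sex; the authors' exact measures and statistical test may differ:

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# One row per trial; column names are illustrative assumptions.
df = pd.read_csv("neck_movements.csv")

# Two-way ANOVA with interaction: device, sex, device:sex.
model = ols("rom ~ C(device) * C(sex)", data=df).fit()
print(anova_lm(model, typ=2))   # F-tests for each effect
```

A non-significant device term in this table corresponds to the paper's device-agnostic conclusion.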


Subject(s)
Mobile Applications, Telemedicine, Handheld Computers
5.
Sensors (Basel); 23(2), 2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36679387

ABSTRACT

The automobile industry has developed dramatically in recent years, and with the growing supply of vehicles, cars have become deeply established in everyday life. As vehicles with autonomous driving functions become more widespread, vehicle safety is an increasingly prominent issue. Various car-following models for safe driving have long been studied, and more recently a Responsibility-Sensitive Safety (RSS) model was proposed by Mobileye. However, existing car-following models and the RSS model derive the safe inter-vehicle distance from vehicle speed and acceleration information only, and therefore cannot respond to changes in road conditions caused by the weather. In this paper, to ensure safety when the RSS model is applied with a variable-focus-function camera, we present an improved RSS model that accounts for weather-induced changes in road conditions, and we derive a safety distance based on the proposed model.
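For reference, the baseline RSS minimum safe longitudinal gap (Shalev-Shwartz et al.) uses exactly the speed and acceleration quantities the abstract mentions. The weather adjustment below is only an illustrative assumption (scaling the achievable decelerations by a road-friction factor), not the paper's exact model:

```python
def rss_min_gap(v_rear, v_front, rho, a_max_accel, b_min_brake, b_max_brake):
    """Standard RSS minimum safe longitudinal gap (m).

    v_rear, v_front: speeds of the following/leading vehicle (m/s)
    rho: response time of the following vehicle (s)
    a_max_accel: its max acceleration during the response time (m/s^2)
    b_min_brake: its guaranteed (minimum) braking deceleration (m/s^2)
    b_max_brake: the leading vehicle's max braking deceleration (m/s^2)
    """
    v_resp = v_rear + rho * a_max_accel
    gap = (v_rear * rho
           + 0.5 * a_max_accel * rho**2
           + v_resp**2 / (2 * b_min_brake)
           - v_front**2 / (2 * b_max_brake))
    return max(gap, 0.0)

def rss_min_gap_weather(v_rear, v_front, rho, a_max, b_min, b_max, mu=0.6):
    """Illustrative weather extension (an assumption, not the paper's model):
    scale achievable decelerations by a friction factor mu < 1 for wet or
    icy roads, which lengthens the required gap."""
    return rss_min_gap(v_rear, v_front, rho, a_max, mu * b_min, mu * b_max)
```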


Subject(s)
Traffic Accidents, Automobile Driving, Humans, Traffic Accidents/prevention & control, Safety, Automobiles, Acceleration
6.
Sensors (Basel); 23(2), 2023 Jan 09.
Article in English | MEDLINE | ID: mdl-36679537

ABSTRACT

In smart cities, a large amount of optical camera equipment is deployed, including closed-circuit television (CCTV), unmanned aerial vehicles (UAVs), and smartphones. However, additional information about these devices, such as their 3D position, orientation, and principal distance, is usually not provided. To solve this problem, this study used a structured mobile mapping system point cloud to investigate methods of estimating the principal distance, position, and orientation of optical sensors without initial values. The principal distance was calculated using two direct linear transformation (DLT) models and a perspective projection model. Methods for estimating position and orientation were discussed, and their stability was tested using real-world sensors. The camera position and orientation were best estimated when the perspective projection model was used, whereas the original DLT model showed a significant error in orientation estimation, which was likely influenced by the correlation between the DLT model parameters. With the perspective projection model, the position and orientation errors were 0.80 m and 2.55°, respectively. However, for a fixed-wing UAV, the estimation failed owing to ground control point placement problems.
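The DLT model referenced here is the classic 11-parameter direct linear transformation; a minimal sketch of estimating the 3x4 projection matrix from 3D-2D correspondences follows (this is the textbook formulation, not necessarily the authors' exact variant):

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Estimate the 3x4 projection matrix P from n >= 6 correspondences.
    X: (n, 3) world points; x: (n, 2) image points."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Pt = np.array([Xw, Yw, Zw, 1.0])
        rows.append(np.concatenate([Pt, np.zeros(4), -u * Pt]))
        rows.append(np.concatenate([np.zeros(4), Pt, -v * Pt]))
    A = np.vstack(rows)
    _, _, Vt = np.linalg.svd(A)      # solution is the right singular vector
    return Vt[-1].reshape(3, 4)      # of the smallest singular value
```

Position and orientation then follow from decomposing P (the camera centre is the null space of P; RQ decomposition of its left 3x3 block yields intrinsics and rotation), which is where the parameter correlations the authors mention can surface.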


Subject(s)
Smartphone, Unmanned Aerial Devices, Cities, Linear Models
7.
Entropy (Basel); 24(8), 2022 Aug 19.
Article in English | MEDLINE | ID: mdl-36010822

ABSTRACT

Camera sensor identification has numerous forensics and authentication applications. In this work, we follow an identification methodology for smartphone camera sensors based on properties of the Dark Signal Nonuniformity (DSNU) in collected images. This requires taking dark pictures, which users can easily do by holding the phone against their palm, as proposed in various earlier works. From such pictures, we extract low- and mid-frequency AC coefficients of the DCT (Discrete Cosine Transform) and classify the data using machine learning techniques. Traditional algorithms such as KNN (K-Nearest Neighbor) give reasonable classification results, but we obtained the best results with a wide neural network, which, despite its simplicity, surpassed even a more complex network architecture that we tried. Our analysis showed that the blue channel provided the best separation, in contrast to previous works that recommended the green channel for its higher encoding power.
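A sketch of the feature extraction step as described: a 2D DCT of a dark frame with a low/mid-frequency AC band kept. The band bounds and classifier settings are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np
from scipy.fft import dctn
from sklearn.neighbors import KNeighborsClassifier

def dsnu_features(dark_img, band=(1, 12)):
    """Low/mid-frequency AC coefficients of a dark frame's 2D DCT.
    Coefficient (i, j) is kept when band[0] <= i + j <= band[1];
    the DC term (0, 0) is always excluded."""
    c = dctn(dark_img.astype(float), norm="ortho")
    i, j = np.indices(c.shape)
    mask = (i + j >= band[0]) & (i + j <= band[1])
    return c[mask]

# X: stacked features from dark frames of known phones, y: phone labels.
# clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
```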

8.
Sensors (Basel); 21(12), 2021 Jun 09.
Article in English | MEDLINE | ID: mdl-34207851

ABSTRACT

There have been significant advances in target detection in the autonomous vehicle context. To build more robust systems that can overcome weather hazards as well as sensor problems, sensor fusion is taking the lead in this field. Laser Imaging Detection and Ranging (LiDAR) and camera sensors are two of the most widely used sensors for this task, since they can accurately provide important features such as the target's depth and shape. However, most current state-of-the-art target detection algorithms for autonomous cars do not take into account the hardware limitations of the vehicle, such as its reduced computing power compared with cloud servers and its latency requirements. In this work, we propose Edge Computing Tensor Processing Unit (TPU) devices as hardware support, owing to their computing capabilities for machine learning algorithms and their reduced power consumption, and we develop an accurate and small target detection model for these devices. Our proposed Multi-Level Sensor Fusion model has been optimized for the network edge, specifically for the Google Coral TPU. As a result, high accuracy is obtained on the challenging KITTI dataset while reducing both the memory consumption and the latency of the system.


Subject(s)
Algorithms, Machine Learning, Automobiles, Lasers
9.
Front Plant Sci; 12: 469689, 2021.
Article in English | MEDLINE | ID: mdl-33859655

ABSTRACT

Stripe rust (Pst) is a major disease of wheat crops that, left untreated, leads to severe yield losses. The use of fungicides is often essential to control Pst when sudden outbreaks are imminent. Sensors capable of detecting Pst in wheat crops could optimize the use of fungicides and improve disease monitoring in high-throughput field phenotyping. Deep learning now provides new tools for image recognition and may pave the way for new camera-based sensors that can identify symptoms in the early stages of a disease outbreak in the field. The aim of this study was to train an image classifier to detect Pst symptoms in winter wheat canopies based on a deep residual neural network (ResNet). For this purpose, a large annotation database was created from images taken by a standard RGB camera mounted on a platform at a height of 2 m. Images were acquired while the platform was moved over a randomized field experiment with Pst-inoculated and Pst-free plots of winter wheat. The classifier was trained with 224 × 224 px patches tiled from the original, unprocessed camera images, and it was tested on different stages of the disease outbreak. At patch level, it reached a total accuracy of 90%. At image level, it was evaluated with a sliding window using a large stride of 224 px, allowing for fast test performance, and reached a total accuracy of 77%. Even at a stage with very low disease spread (0.5%) at the very beginning of the Pst outbreak, a detection accuracy of 57% was obtained; in the initial phase of the outbreak, with 2 to 4% disease spread, a detection accuracy of 76% was attained. With further optimization, the classifier could be implemented in embedded systems and deployed on drones, vehicles, or scanning systems for fast mapping of Pst outbreaks.
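Because the stride equals the patch size, the image-level evaluation amounts to non-overlapping tiling. A minimal sketch; the aggregation rule in the comment is an assumption, not necessarily the authors' exact decision rule:

```python
import numpy as np

def sliding_patches(img, size=224, stride=224):
    """Tile an H x W x 3 image into size x size patches; a stride equal
    to the patch size gives non-overlapping tiles for fast inference."""
    H, W = img.shape[:2]
    for y in range(0, H - size + 1, stride):
        for x in range(0, W - size + 1, stride):
            yield (y, x), img[y:y + size, x:x + size]

# Illustrative image-level decision: flag the image as infected if any
# patch is classified as showing Pst symptoms.
# infected = any(classify(p) == "pst" for _, p in sliding_patches(img))
```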

10.
Sensors (Basel); 20(22), 2020 Nov 14.
Article in English | MEDLINE | ID: mdl-33202653

ABSTRACT

Perception of road structures, especially traffic intersections, by visual sensors is an essential task for automated driving. However, compared with intersection detection or visual place recognition, intersection re-identification (intersection re-ID), which strongly affects driving behavior decisions along a given route, has long been neglected by researchers. This paper explores intersection re-ID with a monocular camera sensor. We propose a Hybrid Double-Level re-identification approach that exploits two branches of a deep convolutional neural network to accomplish multiple tasks, including classification of intersections and their fine attributes, and global localization in topological maps. Furthermore, we propose mixed-loss training so that the network learns the similarity of two intersection images. As no public datasets are available for the intersection re-ID task, building on the work of RobotCar, we introduce a new dataset with carefully labeled intersection attributes, called "RobotCar Intersection", which covers more than 30,000 images of eight intersections across different seasons and times of day. Additionally, we provide another dataset, called "Campus Intersection", consisting of panoramic images of eight intersections on a university campus, to verify our topological map updating strategy. Experimental results demonstrate that our proposed approach achieves promising results in re-ID of both coarse road intersections and their global pose, and is well suited for updating and completing topological maps.

11.
Sensors (Basel); 20(21), 2020 Nov 02.
Article in English | MEDLINE | ID: mdl-33147784

ABSTRACT

The main source of delay in public transport systems (buses, trams, metros, railways) is in their stations. For example, a public transport vehicle may travel at 60 km per hour between stations, yet its commercial speed (average en-route speed, including any intermediate delays) does not reach even half that value. The problem that public transport operators must therefore solve is how to reduce delays in stations. From the perspective of transport engineering, there are several ways to approach this issue, from the design of infrastructure and vehicles to passenger traffic management. The tools normally available to traffic engineers are analytical models, microscopic traffic simulation, and, ultimately, real-scale laboratory experiments. In any case, the data required are the number of passengers getting on and off the vehicles, as well as the number of passengers waiting on platforms. Traditionally, such data have been collected manually by field counts or through videos processed by hand. At the same time, public transport networks, especially metropolitan railways, have extensive monitoring infrastructure based on standard video cameras; these are traditionally observed manually or with very basic signal processing support, so there is significant scope for improving data capture and for automating the analysis of site usage, safety, and surveillance. This article shows a way of collecting and analyzing the data needed both to feed traffic models and to analyze laboratory experiments, exploiting recent intelligent sensing approaches. The paper presents a new public video dataset gathered from real-scale laboratory recordings. Part of this dataset has been annotated by hand, marking up head locations to provide a ground truth on which to train and evaluate deep learning detection and tracking algorithms. Tracking outputs are then used to count people getting on and off, achieving a mean accuracy of 92% with less than 0.15% standard deviation on 322 mostly unseen video sequences from the dataset.
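Counting from tracking outputs typically reduces to testing whether each trajectory crosses a virtual counting line. A sketch under that assumption (the dataset's actual door geometry and counting rule may differ):

```python
def count_crossings(tracks, door_y):
    """Count boardings/alightings from head tracks.

    tracks: list of per-person trajectories [(x, y), ...] in image
    coordinates; door_y: a virtual counting line at the vehicle doors
    (a hypothetical setup for illustration).
    """
    on = off = 0
    for track in tracks:
        start_y, end_y = track[0][1], track[-1][1]
        if start_y < door_y <= end_y:
            on += 1     # crossed the line towards the vehicle
        elif end_y < door_y <= start_y:
            off += 1    # crossed the line away from the vehicle
    return on, off
```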


Subject(s)
Algorithms, Motor Vehicles, Computer-Assisted Signal Processing, Transportation, Video Recording, Humans
12.
Sensors (Basel); 20(15), 2020 Jul 31.
Article in English | MEDLINE | ID: mdl-32751864

ABSTRACT

A novel method is described for evaluating the colorimetric accuracy of digital color cameras, based on a new measure of the metamer mismatch body (MMB) induced by the change from the camera as an 'observer' to the human standard observer. Compared to the majority of existing methods for evaluating colorimetric accuracy, the advantage of using the MMB is that it is based on the theory of metamer mismatching and therefore shows how much color error can arise in principle. A new measure of colorimetric accuracy based on the shape of the camera-induced MMB is proposed and tested. MMB shape is measured in terms of the moments of inertia of the MMB treated as a mass of uniform density. Since colorimetric accuracy is independent of any linear transformation of the sensor space, the MMB measure must be as well; normalization by the moments of inertia of the object color solid provides this independence.
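The moments of inertia of a uniform-density body can be estimated directly from point samples of the MMB. A sketch of that computation (the sampling and normalization details of the paper's measure are not reproduced here):

```python
import numpy as np

def inertia_moments(points):
    """Principal moments of inertia (per unit mass) of a body sampled
    as uniform-density points. points: (n, 3) samples from the body,
    e.g. from the MMB in color space."""
    p = points - points.mean(axis=0)        # about the centroid
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    I = np.array([
        [np.mean(y**2 + z**2), -np.mean(x * y),      -np.mean(x * z)],
        [-np.mean(x * y),      np.mean(x**2 + z**2), -np.mean(y * z)],
        [-np.mean(x * z),      -np.mean(y * z),      np.mean(x**2 + y**2)],
    ])
    return np.linalg.eigvalsh(I)            # principal moments

# Normalizing by the corresponding moments of the object color solid,
# computed the same way, makes the measure invariant to linear
# transformations of the sensor space, as the abstract describes.
```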

13.
Sensors (Basel); 20(13), 2020 Jul 01.
Article in English | MEDLINE | ID: mdl-32630350

ABSTRACT

For a safe market launch of automated vehicles, the risks of the overall system as well as its sub-components must be efficiently identified and evaluated. This includes camera-based object detection using artificial intelligence algorithms. It is evident, given the operating principle of a camera, that performance depends heavily on environmental conditions and can be poor, for example, in heavy fog. However, there are other factors influencing the performance of camera-based object detection, which are comprehensively investigated for the first time in this paper. Furthermore, precise modeling of the detection performance and explanation of individual detection results is not possible with the artificial intelligence based algorithms used. Therefore, a modeling approach based on the investigated influence factors is proposed, and the SHapley Additive exPlanations (SHAP) approach is adopted to analyze and explain the detection performance of different object detection algorithms. The results show that many influence factors, such as the relative rotation of an object towards the camera or the position of an object in the image, have essentially the same influence on detection performance regardless of the detection algorithm used. In particular, the revealed weaknesses of the tested object detectors can be used to derive challenging and critical scenarios for the testing and type approval of automated vehicles.
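A sketch of the SHAP workflow on a performance model, with synthetic data and illustrative factor names standing in for the paper's influence factors:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical per-object data: influence factors and a detection score.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "relative_rotation": rng.uniform(0, 90, 500),   # degrees
    "image_position_x":  rng.uniform(0, 1, 500),    # normalized
    "occlusion":         rng.uniform(0, 1, 500),
})
y = (1.0 - 0.005 * X["relative_rotation"] - 0.4 * X["occlusion"]
     + rng.normal(0, 0.05, 500))                    # synthetic scores

surrogate = GradientBoostingRegressor().fit(X, y)   # performance model
explainer = shap.Explainer(surrogate, X)            # SHAP values per factor
shap.plots.beeswarm(explainer(X))                   # global explanation
```

Comparing such plots across detectors is what reveals whether a factor, like relative rotation, behaves the same way regardless of the algorithm.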

14.
Sensors (Basel); 20(7), 2020 Apr 10.
Article in English | MEDLINE | ID: mdl-32290174

ABSTRACT

This paper proposes a classifier for human facial feature annotation that can run on relatively cheap, low-power autonomous microcomputer systems. An autonomous system is one that depends only on locally available hardware and software; for example, it does not use remote services available through the Internet. The proposed solution, which consists of a Histogram of Oriented Gradients (HOG) face detector and a set of neural networks, has average accuracy and average true positive and true negative ratios comparable to state-of-the-art deep neural network (DNN) architectures. However, contrary to DNNs, the proposed method can easily be implemented on a microcomputer with very limited RAM and without additional coprocessors. It was trained and evaluated on a large data set of 200,000 face images and compared with results obtained by other researchers. Further evaluation shows that it can classify facial image attributes in incoming video data captured by the microcomputer's RGB camera sensor. The results can easily be reproduced, as both the data set and the source code can be downloaded. The main and novel contribution of this research is the development and evaluation of a facial image annotation algorithm dedicated to low-power devices without coprocessors, whose implementation is easily portable between hardware platforms and operating systems: virtually the same code works on high-end PCs and on microcomputers, under both Windows and Linux.
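The HOG-detector-plus-small-networks pattern can be sketched with dlib, whose default face detector is HOG-based; the file name and the attribute networks are placeholders, and the authors' exact pipeline may differ:

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()        # HOG-based, CPU-only

img = cv2.imread("frame.jpg")                      # placeholder input
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
for rect in detector(rgb, 1):                      # 1 = one upsampling pass
    face = rgb[rect.top():rect.bottom(), rect.left():rect.right()]
    # feed the crop to small per-attribute classifiers here, e.g.:
    # attributes = [net.predict(face) for net in attribute_nets]
```

Running detection on the CPU with HOG, rather than a DNN, is what keeps the memory and compute footprint within microcomputer limits.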

15.
Sensors (Basel); 20(5), 2020 Mar 10.
Article in English | MEDLINE | ID: mdl-32164292

ABSTRACT

Cell motility is the combined result of cell status and the cell's interaction with its immediate environment. Its detection is now possible thanks to the synergy of high-resolution camera sensors, time-lapse microscopy devices, and dedicated software tools for video and data analysis. In this scenario, we formulated a novel paradigm in which individual cells are considered the sensitive element of a sensor, with the camera acting as a transducer that returns the movement of the cell as an output signal. In this way, cell movement allows us to retrieve information about the chemical composition of the surrounding environment. To optimally exploit this information, we introduce a new setting in which a cell trajectory is divided into sub-tracks, each characterized by a specific kind of motion. We then treat all the sub-tracks of a single-cell trajectory as the signals of a virtual array of cell motility-based sensors. The kinematics of each sub-track is quantified and used for a classification task. To investigate the potential of the proposed approach, we compared its performance with that of a single-trajectory paradigm in evaluating the effects of chemotherapy treatment on prostate cancer cells. Novel pattern recognition algorithms were applied to the descriptors extracted at the sub-track level, with feature and sample selection (a good-teacher learning approach) used for model construction. The experimental results show that performance is higher when a cluster majority rule is added, emulating a sensor fusion procedure. These results highlight the strength of the proposed approach and prefigure its use in lab-on-chip or organ-on-chip applications, where cell motility analysis can be applied massively to time-lapse microscopy images.


Subject(s)
Antineoplastic Agents/pharmacology, Antitumor Drug Screening Assays, Prostate/drug effects, Prostatic Neoplasms/drug therapy, Algorithms, Biomechanical Phenomena, Cell Movement, Cluster Analysis, Humans, Computer-Assisted Image Processing/methods, Machine Learning, Male, Microscopy, Statistical Models, Normal Distribution, PC-3 Cells, Automated Pattern Recognition, Software, Video Recording
16.
Sensors (Basel); 19(16), 2019 Aug 17.
Article in English | MEDLINE | ID: mdl-31426511

ABSTRACT

The determination of daily concentrations of atmospheric pollen is important in the medical and biological fields, but obtaining pollen concentrations is a complex and time-consuming task for specialized personnel. Automatic localization of pollen grains is difficult owing to the high complexity of the images to be processed, which contain polymorphic and clumped pollen grains, dust, and debris. The purpose of this study is to analyze the feasibility of a reliable pollen grain detection system based on a convolutional neural network architecture, to be used later as a critical part of an automated pollen concentration estimation system. We used a training set of 251 videos. Because the videos record the process of focusing the samples, the system exploits the 3D information present in several focal planes. A separate set of 135 videos (containing 1234 pollen grains of 11 pollen types) was used to evaluate detection performance. The results are promising in both detection (98.54% recall and 99.75% precision) and localization accuracy (0.89 average IoU), suggesting that this technique can provide a reliable basis for the development of an automated pollen counting system.


Subject(s)
Deep Learning, Microscopy/methods, Pollen/chemistry, Reproducibility of Results, Videotape Recording
17.
Sensors (Basel); 19(2), 2019 Jan 20.
Article in English | MEDLINE | ID: mdl-30669531

ABSTRACT

Face-based biometric recognition systems are widely employed in places such as airports, immigration offices, and companies, as well as in devices such as mobile phones. However, the security of this recognition method can be compromised by attackers (unauthorized persons) who bypass the recognition system using artificial facial images. In addition, most previous studies on face presentation attack detection have utilized only spatial information. To address this problem, we propose a visible-light camera sensor-based presentation attack detection method that uses both spatial and temporal information, combining deep features extracted by a stacked convolutional neural network (CNN)-recurrent neural network (RNN) with handcrafted features. Through experiments on two public datasets, we demonstrate that the temporal information is sufficient for detecting attacks using face images. We also establish that the handcrafted image features efficiently enhance the detection performance of the deep features, and that the proposed method outperforms previous methods.


Subject(s)
Computer Security, Facial Recognition, Light, Automated Pattern Recognition/methods, Photography/instrumentation, Algorithms, Humans, Computer-Assisted Image Processing, Neural Networks (Computer), Time Factors
18.
Sensors (Basel); 19(2), 2019 Jan 11.
Article in English | MEDLINE | ID: mdl-30642014

ABSTRACT

Detection and classification of road markings are a prerequisite for operating autonomous vehicles. Although most studies have focused on detecting road lane markings, the detection and classification of other road markings, such as arrows and bike markings, have received less attention. We therefore propose a method for detecting and classifying various types of arrow and bike markings on the road in complex environments, using a one-stage deep convolutional neural network (CNN) called RetinaNet. We tested the proposed method on complex road scenarios from three open datasets captured by visible-light camera sensors (the Malaga urban dataset, the Cambridge dataset, and the Daimler dataset) on both a desktop computer and an NVIDIA Jetson TX2 embedded system. Experimental results on the three open databases showed that the proposed RetinaNet-based method outperformed other methods for detection and classification of road markings in terms of both accuracy and processing time.

19.
Environ Sci Pollut Res Int; 26(3): 2722-2733, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30484049

ABSTRACT

Water environment monitoring is of great importance to human health, ecosystem sustainability, and water transport. Unlike traditional water quality monitoring, this paper focuses on visual perception of the water environment. We first introduce the development of a customized aquatic sensor node equipped with an embedded camera sensor. Based on this platform, we present an efficient and holistic contamination detection approach that can automatically adapt to detecting floating debris in dynamic waters or identifying salient regions in static waters. Our approach is designed around compressed sensing theory to give full consideration to the unique challenges of the water environment and the resource constraints of sensor nodes. Both laboratory and field experiments demonstrate that the proposed method can quickly and accurately detect various types of water pollutants, making it a better choice for camera sensor-based water environment monitoring than other methods.


Subject(s)
Environmental Monitoring/instrumentation, Photography/instrumentation, Chemical Water Pollutants/analysis, Ecosystem, Environmental Monitoring/methods, Water Quality
20.
Sensors (Basel); 18(8), 2018 Aug 08.
Article in English | MEDLINE | ID: mdl-30096832

ABSTRACT

Iris recognition systems are used in high-security applications because of their high recognition rates and the distinctiveness of iris patterns. However, as recent studies report, an iris recognition system can be fooled by artificial iris patterns, reducing its security level. The accuracy of previous presentation attack detection research is limited because it used only features extracted from the global iris region image. To overcome this problem, we propose a new presentation attack detection method for iris recognition that combines features extracted from both local and global iris regions, using convolutional neural networks and support vector machines based on a near-infrared (NIR) light camera sensor. The detection results from each kind of image feature are fused using two fusion methods, at feature level and at score level, to enhance the detection ability of each feature type. Through extensive experiments on two popular public datasets (LivDet-Iris-2017 Warsaw and Notre Dame Contact Lens Detection 2015) and their fusion, we validate the efficiency of the proposed method, obtaining smaller detection errors than previous studies.
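Score-level fusion of the local and global detectors is commonly a weighted sum of their per-image scores; a sketch under that assumption (the paper's exact fusion rule and weights are not specified in the abstract):

```python
import numpy as np

def score_level_fusion(s_local, s_global, w=0.5):
    """Weighted-sum score fusion of two attack detectors. s_local and
    s_global are per-image attack scores in [0, 1]; the equal weighting
    is illustrative, not the paper's exact rule."""
    return w * np.asarray(s_local) + (1 - w) * np.asarray(s_global)

# fused = score_level_fusion(svm_local_scores, svm_global_scores)
# is_attack = fused > threshold   # threshold tuned on a validation set
```

Feature-level fusion, the other scheme mentioned, would instead concatenate the local and global feature vectors before a single classifier.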


Subject(s)
Deep Learning, Infrared Rays, Iris/anatomy & histology, Photography/instrumentation, Humans, Neural Networks (Computer), Support Vector Machine