Results 1 - 20 of 58
1.
Accid Anal Prev ; 206: 107692, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39033584

ABSTRACT

Vehicles equipped with automated driving capabilities have shown potential to improve safety and operations. Advanced driver assistance systems (ADAS) and automated driving systems (ADS) have been widely developed to support vehicular automation. Although studies on injury severity outcomes involving automated vehicles are ongoing, there is limited research investigating the difference in injury severity outcomes between ADAS- and ADS-equipped vehicles. To ensure a comprehensive analysis, a multi-source dataset that includes 1,001 ADAS crashes (SAE Level 2 vehicles) and 548 ADS crashes (SAE Level 4 vehicles) is used. Two random parameters multinomial logit models with heterogeneity in the means of random parameters are estimated to better understand the variables influencing crash injury severity outcomes for the ADAS (SAE Level 2) and ADS (SAE Level 4) vehicles. It was found that while 67 percent of crashes involving ADAS-equipped vehicles in the dataset took place on a highway, 94 percent of crashes involving ADS took place in more urban settings. The model estimation results also reveal that the weather indicator, the driver type indicator, differences in system sophistication captured by both manufacture year and high/low mileage, and the rear and front contact indicators all play a role in crash injury severity outcomes. The results offer an exploratory assessment of the safety performance of ADAS- and ADS-equipped vehicles using real-world data and can be used by manufacturers and other stakeholders to guide the direction of deployment and usage.
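As a rough illustration of the modeling family named above, the sketch below fits a plain multinomial logit by gradient descent on made-up crash features and severity labels. It deliberately omits the paper's random-parameters specification with heterogeneity in means; the feature names are assumptions for illustration only.

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with max-subtraction for numerical stability.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(W, X, y):
    # Mean negative log-likelihood of a multinomial logit model.
    p = softmax(X @ W)
    return -np.log(p[np.arange(len(y)), y]).mean()

def fit_mnl(X, y, n_classes, lr=0.1, steps=500):
    # Plain gradient descent on the convex NLL; no random parameters.
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                  # one-hot labels
    for _ in range(steps):
        p = softmax(X @ W)
        W -= lr * X.T @ (p - Y) / len(y)      # gradient of the NLL
    return W

rng = np.random.default_rng(0)
# Hypothetical crash features: [intercept, highway flag, night flag].
X = np.column_stack([np.ones(200),
                     rng.integers(0, 2, 200),
                     rng.integers(0, 2, 200)]).astype(float)
y = rng.integers(0, 3, 200)                   # three toy severity levels
W = fit_mnl(X, y, 3)
```

The fitted class probabilities can then be inspected per crash record; a real analysis would add random parameters and heterogeneity-in-means terms on top of this core likelihood.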


Subject(s)
Accidents, Traffic , Automation , Automobile Driving , Wounds and Injuries , Humans , Accidents, Traffic/statistics & numerical data , Automobile Driving/statistics & numerical data , Automobiles , Logistic Models , Weather , Injury Severity Score , Trauma Severity Indices
2.
Sensors (Basel) ; 24(10)2024 May 13.
Article in English | MEDLINE | ID: mdl-38793952

ABSTRACT

The convergence of edge computing systems with Field-Programmable Gate Array (FPGA) technology has shown considerable promise in enhancing real-time applications across various domains. This paper presents an innovative edge computing system design specifically tailored for pavement defect detection within the Advanced Driver-Assistance Systems (ADASs) domain. The system seamlessly integrates the AMD Xilinx AI platform into a customized circuit configuration, capitalizing on its capabilities. Utilizing cameras as input sensors to capture road scenes, the system employs a Deep Learning Processing Unit (DPU) to execute the YOLOv3 model, enabling the identification of three distinct types of pavement defects with high accuracy and efficiency. Following defect detection, the system efficiently transmits detailed information about the type and location of detected defects via the Controller Area Network (CAN) interface. This integration of FPGA-based edge computing not only enhances the speed and accuracy of defect detection, but also facilitates real-time communication between the vehicle's onboard controller and external systems. Moreover, the successful integration of the proposed system transforms ADAS into a sophisticated edge computing device, empowering the vehicle's onboard controller to make informed decisions in real time. These decisions are aimed at enhancing the overall driving experience by improving safety and performance metrics. The synergy between edge computing and FPGA technology not only advances ADAS capabilities, but also paves the way for future innovations in automotive safety and assistance systems.
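The CAN transmission step described above can be sketched as payload packing alone, with no bus hardware. The 8-byte layout and field names below are assumptions for illustration, not the paper's actual message format; a classic CAN 2.0 frame carries at most 8 data bytes.

```python
import struct

# Hypothetical payload layout for one detected pavement defect:
#   defect class (uint8), confidence in percent (uint8),
#   bounding-box centre x, y and width in pixels (3 x uint16, big-endian).
DEFECT_CLASSES = {0: "crack", 1: "pothole", 2: "patch"}

def encode_defect(cls_id, confidence_pct, cx, cy, width):
    # Pack into exactly 8 bytes, the CAN 2.0 frame data limit.
    return struct.pack(">BBHHH", cls_id, confidence_pct, cx, cy, width)

def decode_defect(payload):
    return struct.unpack(">BBHHH", payload)

frame = encode_defect(1, 87, 640, 360, 120)   # a pothole detection (toy values)
```

On a real system, `frame` would become the data field of a CAN message sent via the vehicle's CAN controller.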

3.
Accid Anal Prev ; 203: 107621, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38729056

ABSTRACT

The emerging connected vehicle (CV) technologies facilitate the development of integrated advanced driver assistance systems (ADASs), in which various functions are coordinated within a comprehensive framework. However, challenges arise in enabling drivers to perceive important information with minimal distraction when multiple messages are provided simultaneously by integrated ADASs. To this end, this study introduces three types of human-machine interfaces (HMIs) for an integrated ADAS: 1) three messages using a visual display only, 2) four messages using a visual display only, and 3) three messages using visual plus auditory displays. Differences in driving performance across the three HMI types are examined to investigate the impacts of information quantity and display format on driving behaviors, along with variations in drivers' responses to the three HMI types. Driving behaviors of 51 drivers across the three HMI types were investigated in eight field testing scenarios. These scenarios include warnings for rear-end collision, lateral collision, forward collision, lane-change, and curve speed, as well as notifications for emergency events downstream, the specified speed limit, and car-following behaviors. Results indicate that, compared to a visual display only, presenting three messages through visual and auditory displays enhances driving performance in four typical scenarios. Compared to the presentation of three messages, a visual display offering four messages improves driving performance in rear-end collision warning scenarios but diminishes performance in lane-change scenarios. Additionally, the relationship between the information quantity and display formats shown on HMIs and driving performance can be moderated by drivers' gender, occupation, driving experience, annual driving distance, and safety attitudes. These findings can inform designers in the automotive industry developing HMIs for future CVs.


Subject(s)
Accidents, Traffic , Automobile Driving , Humans , Automobile Driving/psychology , Male , Female , Adult , Accidents, Traffic/prevention & control , Young Adult , User-Computer Interface , Man-Machine Systems , Automobiles , Middle Aged , Data Display
4.
Sensors (Basel) ; 24(7)2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38610538

ABSTRACT

Safe autonomous vehicle (AV) operations depend on an accurate perception of the driving environment, which necessitates the use of a variety of sensors. Computational algorithms must then process all of this sensor data, which typically results in a high on-vehicle computational load. For example, existing lane markings are designed for human drivers, can fade over time, and can be contradictory in construction zones, which requires specialized sensing and computational processing in an AV. However, this standard process can be avoided if the lane information is simply transmitted directly to the AV. High definition maps and roadside units (RSUs) can be used for direct data transmission to the AV, but can be prohibitively expensive to establish and maintain. Additionally, to ensure robust and safe AV operations, more redundancy is beneficial. A cost-effective and passive solution is essential to address this need. In this research, we propose a new infrastructure information source (IIS), chip-enabled raised pavement markers (CERPMs), which provide environmental data to the AV while also decreasing the AV compute load and the associated increase in vehicle energy use. CERPMs are installed in place of traditional ubiquitous raised pavement markers along road lane lines to transmit geospatial information along with the speed limit using the long range wide area network (LoRaWAN) protocol directly to nearby vehicles. This information is then compared to the Mobileye commercial off-the-shelf traditional system that uses computer vision processing of lane markings. Our perception subsystem processes the raw data from both CERPMs and Mobileye to generate a viable path required for a lane centering (LC) application. To evaluate the detection performance of both systems, we consider three test routes with varying conditions. Our results show that the Mobileye system failed to detect lane markings when the road curvature exceeded ±0.016 m⁻¹. For the steep curvature test scenario, it could only detect lane markings on both sides of the road for just 6.7% of the given test route. On the other hand, the CERPMs transmit the programmed geospatial information to the perception subsystem on the vehicle to generate a reference trajectory required for vehicle control. The CERPMs successfully generated the reference trajectory for vehicle control in all test scenarios. Moreover, the CERPMs can be detected up to 340 m from the vehicle's position. Our overall conclusion is that CERPM technology is viable and has the potential to address the operational robustness and energy efficiency concerns plaguing the current generation of AVs.
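The curvature threshold above can be checked with a small helper. Menger curvature of three consecutive lane points is one standard way to estimate road curvature; the abstract does not specify how the authors computed theirs, so this is an illustrative stand-in.

```python
import math

def menger_curvature(p1, p2, p3):
    # Curvature of the circle through three 2-D points: k = 4A / (a*b*c),
    # where A is the triangle area and a, b, c are the side lengths.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    area = abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p3, p1)
    if area == 0.0:
        return 0.0                      # collinear points: straight road
    return 4.0 * area / (a * b * c)

# The vision-failure threshold reported in the abstract, in 1/m.
CURVATURE_LIMIT = 0.016

def exceeds_limit(p1, p2, p3):
    return menger_curvature(p1, p2, p3) > CURVATURE_LIMIT
```

For example, three points on a circle of radius 50 m give a curvature of 0.02 m⁻¹, which exceeds the reported 0.016 m⁻¹ failure threshold.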

5.
Accid Anal Prev ; 202: 107599, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38669900

ABSTRACT

PURPOSE: We examined collision warning systems with different modalities and timing thresholds, assessing their impact on responses to pedestrian hazards by drivers with impaired contrast sensitivity (ICS). METHODS: Seventeen ICS (70-84 y, median CS 1.35 log units) and 17 normal vision (NV: 68-73 y, median CS 1.95) participants completed 6 city drives in a simulator with 3 bimodal warnings: visual-auditory, visual-directional-tactile, and visual-non-directional-tactile. Each modality had one drive with early and one with late warnings, triggered at 3.5 s and 2 s time-to-collision, respectively. RESULTS: ICS participants triggered more early (43 vs 37 %) and late warnings (12 vs 6 %) than NV participants and had more collisions (3 vs 0 %). Early warnings reduced time to fixate hazards (late 1.9 vs early 1.2 s, p < 0.001), brake response times (2.8 vs 1.8 s, p < 0.001) and collision rates (1.2 vs 0.02 %). With late warnings, ICS participants took 0.7 s longer to brake than NV (p < 0.001) and had an 11 % collision rate (vs 0.7 % with early warnings). Non-directional-tactile warnings yielded the lowest collision rates for ICS participants (4 vs auditory 12 vs directional-tactile 15.2 %) in late warning scenarios. All ICS participants preferred early warnings. CONCLUSIONS: While early warnings improved hazard responses and reduced collisions for ICS participants, late warnings did not, resulting in high collision rates. In contrast, both early and late warnings were helpful for NV drivers. Non-directional-tactile warnings were the most effective in reducing collisions. The findings provide insights relevant to the development of hazard warnings tailored for drivers with impaired vision.
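The 3.5 s and 2 s time-to-collision thresholds described above can be sketched as a simple constant-velocity TTC classifier; this is an illustrative simplification of the warning logic, not the simulator's implementation.

```python
def time_to_collision(gap_m, closing_speed_mps):
    # Constant-velocity TTC; infinite when the gap is not closing.
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def warning_stage(gap_m, closing_speed_mps,
                  early_ttc_s=3.5, late_ttc_s=2.0):
    # Thresholds follow the abstract: early warnings triggered at
    # 3.5 s TTC, late warnings at 2.0 s TTC.
    ttc = time_to_collision(gap_m, closing_speed_mps)
    if ttc <= late_ttc_s:
        return "late"
    if ttc <= early_ttc_s:
        return "early"
    return "none"
```

For example, a pedestrian 30 m ahead with a 10 m/s closing speed (TTC 3.0 s) falls in the early-warning band, while 15 m ahead (TTC 1.5 s) falls in the late-warning band.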


Subject(s)
Accidents, Traffic , Automobile Driving , Contrast Sensitivity , Reaction Time , Humans , Aged , Male , Female , Aged, 80 and over , Accidents, Traffic/prevention & control , Computer Simulation , Vision Disorders , Case-Control Studies , Protective Devices , Time Factors
6.
Sensors (Basel) ; 24(2)2024 Jan 12.
Article in English | MEDLINE | ID: mdl-38257575

ABSTRACT

Line-of-sight (LOS) sensors in newer vehicles have the potential to help avoid crash and near-crash scenarios through advanced driver assistance systems; furthermore, connected vehicle technologies (CVT) also have a promising role in advancing vehicle safety. This study used crash and near-crash events from the Second Strategic Highway Research Program Naturalistic Driving Study (SHRP2 NDS) to reconstruct crash events so that the applicable benefits of sensors in LOS systems and CVT can be compared. The benefits of CVT over LOS systems include additional reaction time before a predicted crash, as well as a lower deceleration value needed to prevent a crash. This work acts as a baseline effort to determine the potential safety benefits of CVT-enabled systems over LOS sensors alone.
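The "lower deceleration value" benefit can be illustrated with the standard stopping-kinematics relation: the deceleration required to stop within the remaining gap after the reaction time is a = v² / (2·(d − v·t_react)). The numbers below are toy values, not SHRP2 reconstructions.

```python
def required_decel(speed_mps, distance_m, reaction_time_s):
    # Constant deceleration needed to stop before the conflict point,
    # after the reaction time has consumed part of the gap:
    #   a = v^2 / (2 * (d - v * t_react))
    remaining = distance_m - speed_mps * reaction_time_s
    if remaining <= 0:
        return float("inf")             # crash unavoidable by braking alone
    return speed_mps ** 2 / (2.0 * remaining)

v, d = 20.0, 100.0                      # 20 m/s closing speed, 100 m gap (toy)
a_los = required_decel(v, d, 2.5)       # later line-of-sight detection
a_cvt = required_decel(v, d, 1.0)       # earlier connected-vehicle warning
```

With the earlier CVT warning, 2.5 m/s² of braking suffices, versus 4.0 m/s² for the later LOS detection, matching the qualitative benefit the abstract describes.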

7.
Sensors (Basel) ; 23(23)2023 Nov 26.
Article in English | MEDLINE | ID: mdl-38067798

ABSTRACT

Many modern automated vehicle sensor systems use light detection and ranging (LiDAR) sensors. The prevailing technology is scanning LiDAR, where a collimated laser beam illuminates objects sequentially point-by-point to capture 3D range data. In current systems, the point clouds from the LiDAR sensors are mainly used for object detection. To estimate the velocity of an object of interest (OoI) in the point cloud, tracking of the object or sensor data fusion is needed. Scanning LiDAR sensors show the motion distortion effect, which occurs when objects have a velocity relative to the sensor. Often this effect is filtered out by sensor data fusion so that an undistorted point cloud can be used for object detection. In this study, we developed a method using an artificial neural network to estimate an object's velocity and direction of motion in the sensor's field of view (FoV) based on the motion distortion effect, without any sensor data fusion. This network was trained and evaluated with a synthetic dataset featuring the motion distortion effect. With the method presented in this paper, one can estimate the velocity and direction of an OoI that moves independently from the sensor from a single point cloud using only one sensor. The method achieves a root mean squared error (RMSE) of 0.1187 m s⁻¹ and a two-sigma confidence interval of [-0.0008 m s⁻¹, 0.0017 m s⁻¹] for the axis-wise estimation of an object's relative velocity, and an RMSE of 0.0815 m s⁻¹ and a two-sigma confidence interval of [0.0138 m s⁻¹, 0.0170 m s⁻¹] for the estimation of the resultant velocity. The extracted velocity information (4D-LiDAR) is available for motion prediction and object tracking and can lead to more reliable velocity data due to greater redundancy for sensor data fusion.
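A minimal sketch of the motion distortion cue the network exploits: sequentially scanned points of a moving target are displaced in proportion to their sample time. Here a closed-form least-squares fit stands in for the paper's neural network, and the target geometry and timing are invented for illustration.

```python
def distorted_scan(nominal_x, scan_dt_s, v_mps):
    # A scanning LiDAR samples points one after another; a target moving
    # at v shifts each later sample by v * t (the motion distortion).
    return [(x + v_mps * i * scan_dt_s, i * scan_dt_s)
            for i, x in enumerate(nominal_x)]

def estimate_velocity(nominal_x, observed):
    # Least-squares slope of displacement vs. sample time recovers v,
    # the same geometric cue the paper's network learns from.
    disp = [ox - nx for (ox, _), nx in zip(observed, nominal_x)]
    t = [ti for _, ti in observed]
    t_mean = sum(t) / len(t)
    d_mean = sum(disp) / len(disp)
    num = sum((ti - t_mean) * (di - d_mean) for ti, di in zip(t, disp))
    den = sum((ti - t_mean) ** 2 for ti in t)
    return num / den

nominal = [0.0, 0.1, 0.2, 0.3, 0.4]     # known target shape (toy flat wall)
scan = distorted_scan(nominal, 0.001, 2.0)
```

With noise-free toy data the fit recovers the 2.0 m/s target velocity exactly; the paper's network handles the realistic case where the undistorted shape is unknown.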

8.
Sensors (Basel) ; 23(21)2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37960441

ABSTRACT

Detecting drowsiness among drivers is critical for ensuring road safety and preventing accidents caused by drowsy or fatigued driving. Research on yawn detection among drivers has great significance for improving traffic safety. Although various deep learning-based approaches have been proposed, there is still room for improvement in developing better and more accurate drowsiness detection systems using behavioral features such as mouth and eye movement. This study proposes a convolutional neural network (CNN) architecture for driver drowsiness detection. Experiments involve using the DLIB library to locate key facial points and calculate the mouth aspect ratio (MAR). To compensate for the small dataset, data augmentation is performed for the 'yawning' and 'no_yawning' classes. Models are trained and tested on the original and augmented datasets to analyze the impact on model performance. Experimental results demonstrate that the proposed CNN model achieves an average accuracy of 96.69%. Performance comparison with existing state-of-the-art approaches shows better performance of the proposed model.
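The mouth aspect ratio step can be sketched as below. The landmark ordering (eight inner-lip points, as in dlib's 68-point model) and the yawn threshold are assumptions for illustration, since the abstract does not specify them.

```python
import math

def mouth_aspect_ratio(mouth):
    # mouth: 8 (x, y) landmarks, left corner first, then upper lip
    # left-to-right, right corner, then lower lip right-to-left.
    # MAR = mean vertical opening / horizontal width.
    p1, p2, p3, p4, p5, p6, p7, p8 = mouth
    vertical = math.dist(p2, p8) + math.dist(p3, p7) + math.dist(p4, p6)
    horizontal = math.dist(p1, p5)
    return vertical / (3.0 * horizontal)

YAWN_THRESHOLD = 0.6                    # hypothetical decision threshold

def is_yawning(mouth):
    return mouth_aspect_ratio(mouth) > YAWN_THRESHOLD

closed = [(0, 0), (1, 1), (2, 1), (3, 1), (4, 0), (3, -1), (2, -1), (1, -1)]
open_m = [(0, 0), (1, 2), (2, 2), (3, 2), (4, 0), (3, -2), (2, -2), (1, -2)]
```

A per-frame MAR sequence like this would then feed the CNN classifier (or a simple threshold rule) to flag yawning episodes.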


Asunto(s)
Conducción de Automóvil , Redes Neurales de la Computación , Vigilia , Accidentes de Tránsito/prevención & control , Movimientos Oculares
9.
Front Neurol ; 14: 1225751, 2023.
Article in English | MEDLINE | ID: mdl-37900602

ABSTRACT

Introduction: Parkinson's disease (PD) is a progressive neurodegenerative disorder that affects, according to the International Classification of Functioning, Disability and Health (ICF), body systems (cognitive, visual, and motor) and functions (e.g., decreased executive functions, decreased visual acuity, impaired contrast sensitivity, decreased coordination), all of which impact driving performance, an instrumental activity of daily living in the ICF domains of "Activity" and "Participation". Although there is strong evidence of impaired driving performance in PD, few studies have explored the real-world benefits of in-vehicle automation technologies, such as in-vehicle information systems (IVIS) and advanced driver assistance systems (ADAS), for drivers with PD. These technologies hold potential to alleviate driving impairments, reduce errors, and improve overall performance, allowing individuals with PD to maintain their mobility and independence more safely and for longer periods. This preliminary study aimed to fill the gap in the literature by examining the impact of IVIS and ADAS on driving safety, as indicated by the number of driving errors made by people with PD in an on-road study. Methods: Forty-five adults with diagnosed PD drove a 2019 Toyota Camry equipped with IVIS and ADAS features (Toyota Safety Sense 2.0) on a route containing highway and suburban roads. Participants drove half of the route with the IVIS and ADAS systems activated and the other half with the systems deactivated. Results: The results suggest that systems that assume control of the driving task, such as adaptive cruise control, were most effective in reducing driving errors. Furthermore, individual differences in cognitive abilities, particularly memory, were significantly correlated with the total number of driving errors when the systems were deactivated, but no significant correlations were present when the systems were activated. Physical capability factors, such as rigidity and bradykinesia, were not significantly correlated with driving error. Discussion: Taken together, these results show that in-vehicle driver automation systems can benefit drivers with PD and diminish the impact of individual differences in driver cognitive ability.

10.
Accid Anal Prev ; 191: 107195, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37441985

ABSTRACT

Driving simulator studies are a popular means to investigate driving behaviour in a controlled environment and to test safety-critical events that would otherwise not be possible in real-world driving conditions. While several factors affect driving performance, driver distraction has been emphasised as a safety-critical issue across the globe. In this context, this study explores the impact of distraction imposed by mobile phone usage, i.e., writing and reading text messages, on driver behaviour. As part of the greater i-DREAMS project, this study uses a car driving simulator experimental design in Germany to investigate driver behaviour under various conditions: (I) a monitoring scenario representing normal driving conditions, (II) an intervention scenario in which drivers receive fixed-timing in-vehicle interventions in case of unsafe driving manoeuvres, and (III) a distraction scenario in which drivers receive in-vehicle interventions based on task completion capability, with mobile phone distraction imposed. In addition, eye-tracking glasses are used to further explore drivers' attention allocation and eye movement behaviour. This research focuses on driver response to risky traffic events (i.e., potential pedestrian collisions and tailgating) and the impact of distraction on driving performance, by analysing a set of eye movement and driving performance measures of 58 participants. The results reveal a significant change in drivers' gaze patterns during the distraction drives, with significantly more gaze points towards the i-DREAMS intervention display (the advanced driver assistance system utilised in this study). The overall statistical analysis of driving performance measures suggests broadly similar impacts on driver behaviour during distraction drives; a higher deviation of lateral positioning was noted irrespective of the event risk levels, and lower longitudinal acceleration rates were observed for pedestrian collisions and non-critical events during distracted driving.


Asunto(s)
Conducción de Automóvil , Teléfono Celular , Conducción Distraída , Envío de Mensajes de Texto , Humanos , Conducción Distraída/prevención & control , Accidentes de Tránsito/prevención & control , Movimientos Oculares
11.
Accid Anal Prev ; 190: 107130, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37336048

ABSTRACT

Advanced Driver Assistance Systems (ADAS) support drivers with some driving tasks. However, drivers may lack appropriate knowledge about ADAS, resulting in inadequate mental models. This may lead to drivers misusing ADAS or mistrusting the technologies, especially after encountering edge-case events (situations beyond the capability of an ADAS where the system may malfunction or fail), and may also adversely affect driver workload. Literature suggests mental models could be improved through exposure to ADAS-related driving situations, especially those related to ADAS capabilities and limitations. The objective of this study was to examine the impact of frequency and quality of exposure on drivers' understanding of Adaptive Cruise Control (ACC), their trust, and their workload after driving with ACC. Sixteen novice ACC users were recruited for this longitudinal driving simulator study. Drivers were randomly assigned to one of two groups: the 'Regular Exposure' group encountering 'routine' edge-case events, and the 'Enhanced Exposure' group encountering 'routine' and 'rare' events. Each participant undertook four simulator sessions, each separated by about a week. Each session comprised a simulator drive featuring five edge-case scenarios. The study followed a mixed design, with exposure frequency as the within-subject variable and quality of exposure (defined by the two groups) as the between-subject variable. Surveys measured drivers' trust, workload, and mental models. The results from analyses using linear regression models revealed that drivers' mental models of ACC improve with frequency of exposure to ACC and associated edge-case driving situations. This was more the case for drivers who experienced 'rare' ACC edge cases. The findings also indicate that for those who encountered 'rare' edge cases, workload was higher and trust was lower than for those who did not. These findings are significant since they underline the importance of experience and familiarity with ADAS for safe operation. While they indicate that drivers benefit from increased exposure to ACC and edge cases in terms of appropriate use of ADAS, and ultimately promise crash reductions and injury prevention, a challenge remains regarding how to provide drivers with appropriate exposure in a safe manner.


Asunto(s)
Conducción de Automóvil , Humanos , Accidentes de Tránsito/prevención & control , Equipos de Seguridad , Confianza , Carga de Trabajo
12.
Data Brief ; 48: 109146, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37128585

ABSTRACT

Accurate perception and awareness of the environment surrounding the automobile is a challenge in automotive research. This article presents A3CarScene, a dataset recorded while driving a research vehicle equipped with audio and video sensors on public roads in the Marche Region, Italy. The sensor suite includes eight microphones installed inside and outside the passenger compartment and two dashcams mounted on the front and rear windows. Approximately 31 h of data for each device were collected during October and November 2022 by driving about 1500 km along diverse roads and landscapes, in variable weather conditions, in daytime and nighttime hours. All key information for the scene understanding process of automated vehicles has been accurately annotated. For each route, annotations with beginning and end timestamps report the type of road traveled (motorway, trunk, primary, secondary, tertiary, residential, and service roads), the degree of urbanization of the area (city, town, suburban area, village, exurban and rural areas), the weather conditions (clear, cloudy, overcast, and rainy), the level of lighting (daytime, evening, night, and tunnel), the type (asphalt or cobblestones) and moisture status (dry or wet) of the road pavement, and the state of the windows (open or closed). This large-scale dataset is valuable for developing new driving assistance technologies based on audio or video data alone or in a multimodal manner and for improving the performance of systems currently in use. The data acquisition process with sensors in multiple locations allows for the assessment of the best installation placement concerning the task. Deep learning engineers can use this dataset to build new baselines, as a comparative benchmark, and to extend existing databases for autonomous driving.

13.
Digit Health ; 9: 20552076231174782, 2023.
Article in English | MEDLINE | ID: mdl-37188078

ABSTRACT

Background: Level 3 automated driving systems involve the continuous performance of the driving task by artificial intelligence within set environmental conditions, such as a straight highway. The driver's role in Level 3 is to resume responsibility for the driving task in response to any departure from these conditions. As automation increases, a driver's attention may divert towards non-driving-related tasks (NDRTs), making transitions of control between the system and user more challenging. Safety features such as physiological monitoring thus become important with increasing vehicle automation. However, to date there has been no attempt to synthesise the evidence for the effect of NDRT engagement on drivers' physiological responses in Level 3 automation. Methods: A comprehensive search of the electronic databases MEDLINE, EMBASE, Web of Science, PsycINFO, and IEEE Xplore will be conducted. Empirical studies assessing the effect of NDRT engagement on at least one physiological parameter during Level 3 automation, in comparison with a control group or baseline condition, will be included. Screening will take place in two stages, and the process will be outlined within a PRISMA flow diagram. Relevant physiological data will be extracted from studies and analysed using a series of meta-analyses by outcome. A risk of bias assessment will also be completed on the sample. Conclusion: This review will be the first to appraise the evidence for the physiological effect of NDRT engagement during Level 3 automation, and will have implications for future empirical research and the development of driver state monitoring systems.

14.
Sensors (Basel) ; 22(19)2022 Sep 22.
Article in English | MEDLINE | ID: mdl-36236286

ABSTRACT

The United States has over three trillion vehicle miles of travel annually on over four million miles of public roadways, which require regular maintenance. To maintain and improve these facilities, agencies often temporarily close lanes, reconfigure lane geometry, or completely close the road depending on the scope of the construction project. Lane widths of less than 11 feet in construction zones can impact highway capacity and crash rates. Crash data can be used to identify locations where the road geometry could be improved; however, this is a manual process that does not scale well. This paper describes findings on using data from onboard sensors in production vehicles to measure lane widths. Over 200 miles of roadway on US-52, US-41, and I-65 in Indiana were measured using vehicle sensor data and compared with mobile LiDAR point clouds as ground truth, with a root mean square error of approximately 0.24 feet. The novelty of these results is that vehicle sensors can identify, at a network level, when work zones use lane widths substantially narrower than the 11-foot standard and can be used to aid in the inspection and verification of construction specification conformity. This information would contribute to the construction inspection performed by agencies in a safer, more efficient way.
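The RMSE comparison and the 11-foot check described above can be sketched with toy numbers; the per-segment widths below are invented, not the paper's data.

```python
import math

def rmse(measured, truth):
    # Root mean square error between paired measurements.
    return math.sqrt(sum((m - t) ** 2 for m, t in zip(measured, truth))
                     / len(measured))

def narrow_segments(widths_ft, standard_ft=11.0):
    # Flag work-zone segments whose measured lane width falls below
    # the 11-foot standard cited in the abstract.
    return [i for i, w in enumerate(widths_ft) if w < standard_ft]

measured = [11.2, 10.5, 12.0]           # vehicle-sensor estimates (toy data)
lidar    = [11.0, 10.8, 11.9]           # mobile LiDAR ground truth (toy data)
```

Running `rmse(measured, lidar)` over real network-level data would reproduce the paper's accuracy assessment, and `narrow_segments` illustrates the conformity screening use case.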


Subject(s)
Accidents, Traffic , Environment Design , Safety , Travel , United States
15.
Sensors (Basel) ; 22(19)2022 Sep 28.
Article in English | MEDLINE | ID: mdl-36236484

ABSTRACT

This paper proposes a deep learning based object detection method to locate a distant region in an image in real time. It concentrates on distant objects from a vehicular front camcorder perspective, addressing one of the common problems in Advanced Driver Assistance Systems (ADAS) applications: detecting smaller, faraway objects with the same confidence as bigger, closer objects. This paper presents an efficient multi-scale object detection network, termed ConcentrateNet, to detect a vanishing point and concentrate on the near-distant region. Initially, the object detection model produces a larger-scale receptive field detection result and predicts a potential vanishing point location, that is, the farthest location in the frame. Then, the image is cropped near the vanishing point location and processed with the object detection model a second time to obtain distant object detection results. Finally, the two inference results are merged with a specific Non-Maximum Suppression (NMS) method. The proposed architecture can be employed in most object detection models; it is implemented in several state-of-the-art models to check feasibility. Compared with the original models using a higher-resolution input size, ConcentrateNet models use a lower-resolution input size, with less model complexity, and achieve significant precision and recall improvements. Moreover, the proposed ConcentrateNet model is successfully ported onto a low-power embedded system, NVIDIA Jetson AGX Xavier, making it suitable for real-time autonomous machines.
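The merge step can be sketched with a plain greedy IoU-based NMS over the detections pooled from the full-frame and cropped-region passes; the paper uses a specific NMS variant that the abstract does not detail, so this is a generic stand-in.

```python
def iou(a, b):
    # Intersection over union of two (x1, y1, x2, y2) boxes.
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy non-maximum suppression: keep the highest-scoring box,
    # drop any remaining box overlapping a kept one above the threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep

# Two near-duplicate detections of one object plus one distinct object.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
```

Applied to detections from both inference passes (after mapping the cropped-region boxes back to full-frame coordinates), this collapses duplicates of the same object while keeping distinct ones.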


Asunto(s)
Conducción de Automóvil , Redes Neurales de la Computación , Enfermedad Crónica , Recolección de Datos , Humanos
16.
Sensors (Basel) ; 22(19)2022 Oct 05.
Article in English | MEDLINE | ID: mdl-36236655

ABSTRACT

This work introduces a process to develop a tool-independent, high-fidelity, ray tracing-based light detection and ranging (LiDAR) model. This virtual LiDAR sensor includes accurate modeling of the scan pattern and a complete signal processing toolchain of a LiDAR sensor. It is developed as a functional mock-up unit (FMU) using the standardized open simulation interface (OSI) 3.0.2 and functional mock-up interface (FMI) 2.0. Subsequently, it was integrated into two commercial virtual environment frameworks to demonstrate its exchangeability. Furthermore, the accuracy of the LiDAR sensor model is validated by comparing simulation and real measurement data in the time domain and at the point cloud level. The validation results show that the mean absolute percentage error (MAPE) between simulated and measured time domain signal amplitude is 1.7%. In addition, the MAPE of the number of points N_points and mean intensity I_mean values received from the virtual and real targets are 8.5% and 9.3%, respectively. To the authors' knowledge, these are the smallest errors reported for the number of received points N_points and mean intensity I_mean values to date. Moreover, the distance error d_error is below the range accuracy of the actual LiDAR sensor, which is 2 cm for this use case. In addition, the proving ground measurement results are compared with the state-of-the-art LiDAR model provided by commercial software and with the proposed LiDAR model to assess the presented model's fidelity. The results show that the complete signal processing steps and imperfections of real LiDAR sensors need to be considered in the virtual LiDAR to obtain simulation results close to the actual sensor. Such considerable imperfections are optical losses, inherent detector effects, effects generated by the electrical amplification, and noise produced by sunlight.
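The MAPE metric used throughout the validation above is straightforward to reproduce; the amplitude values below are toy numbers, not the paper's measurements.

```python
def mape(actual, predicted):
    # Mean absolute percentage error, as used to compare simulated
    # and measured LiDAR quantities in the abstract.
    return 100.0 * sum(abs(a - p) / abs(a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Toy check against hand-computed values (not the paper's data):
amplitudes_real = [100.0, 200.0]
amplitudes_sim  = [90.0, 210.0]
```

Here the per-sample percentage errors are 10% and 5%, so the MAPE is 7.5%; the same formula applied to N_points, I_mean, or signal amplitudes yields the figures reported above.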

17.
Sensors (Basel) ; 22(9)2022 Apr 21.
Article in English | MEDLINE | ID: mdl-35590881

ABSTRACT

Weather prediction from real-world images is a complex task when targeting classification with neural networks. Moreover, available datasets can contain substantial variance between the locations depicted in the images and the weather conditions those images are meant to represent. In this article, the capabilities of a custom-built driving simulator are explored, specifically its ability to simulate a wide range of weather conditions. The performance of a new synthetic dataset generated by this simulator is also assessed. The results indicate that the use of synthetic datasets in conjunction with real-world datasets can increase the training efficiency of convolutional neural networks (CNNs) by as much as 74%. The article paves a way forward to tackle the persistent problem of bias in vision-based datasets.


Subject(s)
Computer Neural Networks, Weather, Data Collection, Ocular Vision
18.
Sensors (Basel) ; 22(8)2022 Apr 15.
Article in English | MEDLINE | ID: mdl-35459025

ABSTRACT

The design of cooperative advanced driver assistance systems (C-ADAS) requires a holistic, systemic view that considers the bidirectional interaction among three main elements: the driver, the vehicle, and the surrounding environment. The evolution of these systems reflects this need. In this work, we present a survey of C-ADAS and describe a conceptual architecture that includes the driver, the vehicle, and the environment, together with their bidirectional interactions. We address the remote operation of C-ADAS based on the Internet of Vehicles (IoV) paradigm, as well as the enabling technologies involved. We describe the state of the art and the research challenges in the development of C-ADAS. Finally, to quantify the performance of C-ADAS, we describe the principal evaluation mechanisms and performance metrics employed in these systems.


Subject(s)
Traffic Accidents, Automobile Driving, Safety Equipment, Surveys and Questionnaires, Technology
19.
Article in English | MEDLINE | ID: mdl-35270777

ABSTRACT

Machine learning and deep learning techniques are two branches of artificial intelligence that have proven very efficient at solving advanced human problems. The automotive industry currently uses this technology to support drivers with advanced driver assistance systems. These systems support various driving functions and can estimate a driver's capacity for stable driving behavior and road safety. Many studies have shown that a driver's emotions are significant factors governing driving behavior and can lead to severe vehicle collisions. Therefore, continuous monitoring of drivers' emotions can help predict their behavior and avoid accidents. To achieve this goal, a novel hybrid network architecture using a deep neural network and a support vector machine has been developed to predict six to seven driver emotions under different poses, occlusions, and illumination conditions. To determine the emotions, a fusion of Gabor and local binary pattern (LBP) features is extracted and classified using a support vector machine classifier combined with a convolutional neural network. Our proposed model achieved accuracies of 84.41%, 95.05%, 98.57%, and 98.64% on the FER 2013, CK+, KDEF, and KMU-FED datasets, respectively.
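The LBP half of the Gabor/LBP feature fusion can be illustrated with the textbook 8-neighbour local binary pattern on a 3x3 patch; the Gabor vector is assumed to be precomputed elsewhere, and the fusion shown here is plain feature-level concatenation, not necessarily the paper's exact pipeline:

```python
def lbp_code(patch):
    """8-neighbour local binary pattern for a 3x3 grayscale patch:
    each neighbour >= centre contributes one bit, clockwise from
    the top-left corner."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << i for i, p in enumerate(neighbours) if p >= c)

def fuse(gabor_vec, lbp_vec):
    """Feature-level fusion: concatenate the two descriptors into one
    vector before handing it to the SVM/CNN classifier stage."""
    return list(gabor_vec) + list(lbp_vec)

patch = [[6, 5, 2],
         [7, 4, 1],
         [9, 3, 8]]
print(lbp_code(patch))  # 211
```

In a full pipeline the per-pixel LBP codes are typically histogrammed over image regions, and that histogram (rather than raw codes) is what gets fused with the Gabor responses.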


Subject(s)
Artificial Intelligence, Automobile Driving, Traffic Accidents, Emotions, Humans, Machine Learning, Computer Neural Networks
20.
Article in English | MEDLINE | ID: mdl-35206540

ABSTRACT

Monitoring drivers' emotions is a key aspect of designing advanced driver assistance systems (ADAS) for intelligent vehicles. To ensure safety and reduce the likelihood of road accidents, emotion monitoring plays a key role in assessing the driver's mental state while driving. However, pose variations, illumination conditions, and occlusions are factors that hinder reliable detection of driver emotions. To overcome these challenges, two novel approaches using machine learning methods and deep neural networks are proposed to monitor drivers' expressions under varying poses, illuminations, and occlusions. Compared with existing state-of-the-art methods, the first approach obtained accuracies of 93.41%, 83.68%, 98.47%, and 98.18% on the CK+, FER 2013, KDEF, and KMU-FED datasets, respectively, and the second approach improved these to 96.15%, 84.58%, 99.18%, and 99.09%.


Subject(s)
Automobile Driving, Lighting, Traffic Accidents, Emotions, Machine Learning, Computer Neural Networks