Results 1 - 2 of 2
1.
Heliyon ; 10(8): e29396, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38665569

ABSTRACT

Semantic segmentation of Remote Sensing (RS) images involves classifying each pixel of a satellite image into distinct, non-overlapping regions or segments. This task is crucial in various domains, including land cover classification, autonomous driving, and scene understanding. While deep learning has shown promising results, little research specifically addresses the challenge of processing fine details in RS images while also keeping computational demands manageable. To tackle this issue, we propose a novel approach that combines convolutional and transformer architectures. Our design incorporates convolutional layers with a small receptive field to generate fine-grained feature maps for small objects in very high-resolution images, while transformer blocks capture contextual information from the input. By leveraging convolution and self-attention in this manner, we reduce the need for extensive downsampling and enable the network to work with full-resolution features, which is particularly beneficial for handling small objects. Additionally, our approach eliminates the vast training datasets often required by purely transformer-based networks. Our experimental results demonstrate the effectiveness of the method in generating local and contextual features using convolutional and transformer layers, respectively. Our approach achieves a mean Dice score of 80.41%, outperforming well-known techniques such as UNet, the Fully Convolutional Network (FCN), the Pyramid Scene Parsing Network (PSPNet), and the recent Convolutional vision Transformer (CvT) model, which achieved mean Dice scores of 78.57%, 74.57%, 73.45%, and 62.97%, respectively, under the same training conditions and on the same training dataset.
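The mean Dice score used to compare the models above can be sketched as follows. This is an illustrative pure-Python computation over flattened per-pixel class labels; the function names and toy masks are hypothetical, not taken from the paper:

```python
def dice_score(pred, target, cls):
    # Dice = 2|P ∩ T| / (|P| + |T|) for a single class
    p = [v == cls for v in pred]
    t = [v == cls for v in target]
    inter = sum(a and b for a, b in zip(p, t))
    denom = sum(p) + sum(t)
    return 2.0 * inter / denom if denom else 1.0

def mean_dice(pred, target, classes):
    # average over classes, as in the per-model percentages quoted above
    return sum(dice_score(pred, target, c) for c in classes) / len(classes)

# toy flattened label maps (illustrative only)
pred   = [0, 1, 1, 0, 1]
target = [0, 1, 0, 0, 1]
score = mean_dice(pred, target, [0, 1])
```

In practice such scores are computed per image over large label maps and averaged across the test set; the toy masks here only illustrate the per-class overlap arithmetic.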

2.
Sensors (Basel) ; 22(4)2022 Feb 20.
Article in English | MEDLINE | ID: mdl-35214554

ABSTRACT

Information fusion over the diverse data types emanating from many sources in an automated vehicle is the foundation for decision-making in autonomous cars within intelligent transportation. To facilitate data sharing, a variety of communication methods have been integrated to build a heterogeneous V2X infrastructure. However, current information fusion security frameworks are designed for specific application instances and are insufficient to fulfill the overall requirements of Mutual Intelligent Transportation Systems (MITS). In this work, a data fusion security infrastructure with varying degrees of trust has been developed. Furthermore, this paper offers an efficient and effective information fusion security mechanism for multi-source, multi-type data sharing in heterogeneous V2X networks. In particular, an area-based PKI architecture, accelerated by a Graphics Processing Unit (GPU), is given for fast group key exchange based on artificial neural synchronization. A parametric test is performed to ensure that the proposed data fusion trust solution meets the stringent delay requirements of V2X systems. The efficiency of the proposed method is evaluated, and the results show that it surpasses similar strategies already in use.
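Artificial-neural-synchronization key exchange is commonly realized with tree parity machines (TPMs): two parties with secret weights exchange only output bits on shared random inputs, and mutual Hebbian learning drives their weights to a common state usable as key material. The following is a minimal toy sketch under that assumption, not the paper's actual protocol; the sizes (K, N, L), function names, and update-rule variant are illustrative, and the paper's GPU acceleration and area-based PKI layers are omitted:

```python
import random

K, N, L = 3, 4, 3  # hidden units, inputs per unit, weight bound (toy sizes)

def rand_weights(rng):
    # each party's secret: K x N integer weights in [-L, L]
    return [[rng.randint(-L, L) for _ in range(N)] for _ in range(K)]

def hidden_outputs(w, x):
    # sigma_k = sign of each hidden unit's local field; sign(0) -> -1 by convention
    return [1 if sum(wi * xi for wi, xi in zip(w[k], x[k])) > 0 else -1
            for k in range(K)]

def tpm_output(w, x):
    # tau = product of the hidden-unit signs; this single bit is all that is exchanged
    tau = 1
    for s in hidden_outputs(w, x):
        tau *= s
    return tau

def hebbian_update(w, x, tau):
    # only units that agreed with the overall output move; weights clipped to [-L, L]
    sigmas = hidden_outputs(w, x)
    for k in range(K):
        if sigmas[k] == tau:
            for n in range(N):
                w[k][n] = max(-L, min(L, w[k][n] + tau * x[k][n]))

def synchronize(wa, wb, rng, max_steps=100000):
    # both parties see the same public random inputs; they update only on agreement
    for step in range(1, max_steps + 1):
        x = [[rng.choice((-1, 1)) for _ in range(N)] for _ in range(K)]
        ta, tb = tpm_output(wa, x), tpm_output(wb, x)
        if ta == tb:
            hebbian_update(wa, x, ta)
            hebbian_update(wb, x, tb)
        if wa == wb:
            return True, step
    return False, max_steps

rng = random.Random(42)
wa, wb = rand_weights(rng), rand_weights(rng)
synced, steps = synchronize(wa, wb, rng)  # both parties now share identical weights
```

After synchronization the identical weight matrices can be hashed into a symmetric group key; a GPU speed-up of the kind the paper describes would apply to batching these per-unit updates across many parallel exchanges.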


Subject(s)
Autonomous Vehicles, Computer Security, Automobiles, Transportation