Results 1 - 6 of 6
1.
Sensors (Basel) ; 24(13)2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39001034

ABSTRACT

Detecting cracks in building structures is an essential practice that ensures safety, promotes longevity, and maintains the economic value of the built environment. Machine learning (ML) and deep learning (DL) techniques have previously been used to improve classification accuracy. However, conventional convolutional neural network (CNN) methods incur high computational costs owing to their large numbers of trainable parameters, and they tend to extract only high-dimensional shallow features that may not comprehensively represent crack characteristics. We propose a novel convolution and composite attention transformer network (CCTNet) to address these issues. CCTNet enhances crack identification by processing more input pixels and combining convolutional channel attention with window-based self-attention. This dual approach leverages the localized feature extraction of CNNs together with the global contextual understanding afforded by self-attention. Additionally, we apply an improved cross-attention module within CCTNet to increase the interaction and integration of features across adjacent windows. CCTNet achieves a precision of 98.60%, 98.93%, and 99.33% on the Historical Building Crack2019, SDTNET2018, and proposed DS3 datasets, respectively. Furthermore, the training and validation losses of the proposed model are close to zero, and the AUC (area under the curve) is 0.99 and 0.98 for Historical Building Crack2019 and SDTNET2018, respectively. CCTNet not only outperforms existing methods but also sets a new standard for the accurate, efficient, and reliable detection of cracks in building structures.
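The two attention styles the abstract combines can be illustrated with a minimal NumPy sketch. This is not CCTNet's actual architecture: the window partitioning, the sqrt(C) scaling, and the parameter-free sigmoid channel gate are all illustrative assumptions standing in for the model's learned layers.

```python
import numpy as np

def window_self_attention(feat, win=4):
    """Softmax self-attention computed independently inside each
    non-overlapping win x win window of an (H, W, C) feature map.
    Simplified: real models add learned Q/K/V projections."""
    H, W, C = feat.shape
    out = np.empty_like(feat)
    for i in range(0, H, win):
        for j in range(0, W, win):
            x = feat[i:i + win, j:j + win].reshape(-1, C)   # tokens in window
            scores = x @ x.T / np.sqrt(C)                   # scaled dot-product
            attn = np.exp(scores - scores.max(axis=1, keepdims=True))
            attn /= attn.sum(axis=1, keepdims=True)         # row-wise softmax
            out[i:i + win, j:j + win] = (attn @ x).reshape(win, win, C)
    return out

def channel_attention(feat):
    """Squeeze-and-excitation style channel gating: each channel is
    re-weighted by a sigmoid of its global average (no learned weights)."""
    gap = feat.mean(axis=(0, 1))                            # (C,) global pool
    gate = 1.0 / (1.0 + np.exp(-gap))                       # sigmoid gate
    return feat * gate                                      # broadcast over H, W

feat = np.random.default_rng(0).standard_normal((8, 8, 16))
fused = channel_attention(window_self_attention(feat))
print(fused.shape)  # (8, 8, 16)
```

Chaining the two, as above, is one plausible reading of the "dual approach"; the paper's cross-attention between adjacent windows is omitted here.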

2.
Sensors (Basel) ; 24(6)2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38544278

ABSTRACT

Hyperspectral image (HSI) classification remains challenging because of the high dimensionality of the data, the limited number of labeled samples, and the limited spatial resolution. To address these issues, this paper presents a two-module CTNet (convolutional transformer network) that enhances both spatial and spectral features. In the first module, a virtual RGB image is created from the HSI dataset to improve the spatial features using a ResNeXt model pre-trained on natural images, whereas in the second module, PCA (principal component analysis) is applied to reduce the dimensionality of the HSI data. Spectral features are then improved using an EAVT (enhanced attention-based vision transformer), which contains a multiscale enhanced attention mechanism to capture long-range correlations among the spectral features. Furthermore, a joint module fusing the spatial and spectral features is designed to generate an enhanced feature vector. Through comprehensive experiments, we demonstrate the superiority of the proposed approach over state-of-the-art methods, obtaining AA (average accuracy) values of 97.87%, 97.46%, and 98.25%, and 84.46% on the PU, PUC, SV, and Houston13 datasets, respectively.
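The PCA step in the second module can be sketched in plain NumPy: flatten the hyperspectral cube to a pixel-by-band matrix and project onto the top principal components. The band count (103, as in Pavia University) and the choice of k=30 components are illustrative, not values taken from the paper.

```python
import numpy as np

def pca_reduce(cube, k=30):
    """Reduce the spectral dimension of an (H, W, B) hyperspectral cube
    to k principal components via SVD of the centered pixel-band matrix."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    X -= X.mean(axis=0)                        # center each band
    # right singular vectors of X = eigenvectors of the band covariance
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[:k].T).reshape(H, W, k)     # project onto top-k components

cube = np.random.default_rng(1).random((16, 16, 103))  # toy 103-band cube
reduced = pca_reduce(cube, k=30)
print(reduced.shape)  # (16, 16, 30)
```

After this reduction, each pixel carries a k-dimensional spectral vector that downstream blocks (here, the EAVT) can process far more cheaply than the raw bands.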

3.
JNMA J Nepal Med Assoc ; 62(270): 68-71, 2024 Feb 24.
Article in English | MEDLINE | ID: mdl-38409988

ABSTRACT

Introduction: Birth asphyxia causes significant morbidity and mortality among neonates, especially in low- and middle-income countries such as Nepal. However, there is a paucity of data regarding its burden. This study aimed to find the prevalence of birth asphyxia among neonates admitted to the neonatal intensive care unit of a tertiary care hospital.

Methods: This descriptive cross-sectional study was conducted among neonates at a tertiary care hospital from 15 January 2022 to 14 January 2023 after obtaining ethical approval from the Institutional Review Committee. Neonates with gestational age ≥35 weeks were included, and those with major congenital anomalies were excluded. A convenience sampling method was used, and the point estimate was calculated at a 95% confidence interval.

Results: Among 902 neonates, 120 (13.30%; 95% confidence interval 11.08-15.52) had birth asphyxia. A total of 108 (90%) were outborn, and 84 (70%) were male. HIE stages I, II, and III were seen in 47 (39.17%), 64 (53.33%), and 9 (7.50%) of the asphyxiated neonates, respectively. Poor suck (92, 76.67%), seizures (73, 60.83%), and lethargy (70, 58.33%) were common abnormal neurological findings. Fifteen (12.50%) neonates died in the hospital.

Conclusions: The prevalence of birth asphyxia was similar to that reported by other studies in comparable settings. The high burden underscores an urgent need to implement better perinatal care and delivery room management practices.

Keywords: hypoxic-ischemic encephalopathy; neonates; prevalence.
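The prevalence estimate and its confidence interval follow the standard normal approximation for a proportion, which a few lines of Python can reproduce (the lower bound differs from the paper's 11.08 by about 0.01 percentage point, presumably because the paper rounds the point estimate before adding the margin):

```python
import math

n, cases = 902, 120                        # NICU admissions / asphyxia cases
p = cases / n                              # point prevalence
se = math.sqrt(p * (1 - p) / n)            # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se      # normal-approximation 95% CI
print(f"{p:.2%} (95% CI {lo:.2%}-{hi:.2%})")
```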


Subject(s)
Asphyxia Neonatorum ; Intensive Care Units, Neonatal ; Infant, Newborn ; Male ; Pregnancy ; Female ; Humans ; Infant ; Cross-Sectional Studies ; Tertiary Care Centers ; Asphyxia ; Asphyxia Neonatorum/epidemiology
4.
Environ Monit Assess ; 195(9): 1020, 2023 Aug 07.
Article in English | MEDLINE | ID: mdl-37548778

ABSTRACT

Traditionally, rice leaf disease identification relies on visual examination of abnormalities or on analytical results obtained by culturing bacteria in the laboratory. Visual evaluation is qualitative and error-prone, whereas an artificial neural network is faster and more accurate. Traditional machine learning and deep convolutional neural networks (CNNs) have been used to overcome these issues, but such methods still lack semantically rich global and local contextual feature extraction, which limits their efficiency. Hence, in the present study, a multi-scale feature-fusion-based RDTNet has been designed. RDTNet contains two modules: the first extracts features at three scales from the local binary pattern (LBP), grayscale, and histogram of oriented gradients (HOG) images, and the second extracts semantic global and local features through transformer and convolution blocks. Furthermore, the computing cost is reduced by splitting the query into two parts that are fed to the convolution block and the transformer block, respectively. The results indicate that the proposed method achieves very high average precision, F1-score, and accuracy of 99.55%, 99.54%, and 99.53%, respectively, suggesting improved classification accuracy from the multi-scale features and the transformer. The model has also been validated on other datasets, confirming that it can be used for real-time rice disease diagnosis. In the future, such models could be used for monitoring other crops, including wheat, tomato, and potato.
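Of the three hand-crafted inputs the first module consumes, LBP is the simplest to sketch. The following is a basic 8-neighbour variant in NumPy for illustration only; the paper does not specify its LBP radius, neighbour ordering, or border handling, all of which are assumptions here.

```python
import numpy as np

def lbp(img):
    """Basic 8-neighbour local binary pattern for a 2-D grayscale array.
    Each interior pixel gets a byte whose bits record which neighbours
    are >= the centre; border pixels are left as zero for simplicity."""
    img = img.astype(float)
    out = np.zeros(img.shape, dtype=int)
    c = img[1:-1, 1:-1]
    # 8 neighbour offsets in clockwise order, one bit each
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        out[1:-1, 1:-1] |= (nb >= c).astype(int) << bit
    return out

img = np.arange(25).reshape(5, 5) % 7      # toy 5x5 "grayscale" patch
codes = lbp(img)
print(codes.shape)  # (5, 5)
```

The resulting code map is texture-sensitive but illumination-robust, which is why it complements the raw grayscale and HOG channels in a multi-scale fusion.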


Subject(s)
Environmental Monitoring ; Oryza ; Crops, Agricultural ; Electric Power Supplies ; Plant Leaves ; Plant Extracts
5.
Sensors (Basel) ; 22(15)2022 Aug 04.
Article in English | MEDLINE | ID: mdl-35957380

ABSTRACT

An expert performs bone fracture diagnosis from an X-ray image manually, which is a time-consuming process. The development of machine learning (ML) and deep learning (DL) has set a new path in medical image diagnosis. In this study, we propose a novel multi-scale feature fusion of a convolutional neural network (CNN) and an improved Canny edge algorithm that separates fracture images from healthy bone images. The hybrid scale fracture network (SFNet) is a novel two-scale sequential DL model that is highly efficient for bone fracture diagnosis and takes less computation time than other state-of-the-art deep CNN models. The innovation behind this research is the use of an improved Canny edge algorithm to obtain edges that localize the fracture region. Gray images and their corresponding Canny edge images are then fed to the proposed hybrid SFNet for training and evaluation. Furthermore, the performance is compared with state-of-the-art deep CNN models on a bone image dataset. Our results show that SFNet with Canny (SFNet + Canny) achieved the highest accuracy, F1-score, and recall of 99.12%, 99%, and 100%, respectively, for bone fracture diagnosis, demonstrating that the Canny edge algorithm improves CNN performance.
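The edge channel that feeds SFNet can be approximated with a gradient-magnitude map. The sketch below uses plain Sobel filters rather than the full Canny pipeline (no Gaussian smoothing, non-maximum suppression, or hysteresis), so it is a simplified stand-in, not the paper's improved Canny algorithm; the threshold value is arbitrary.

```python
import numpy as np

def sobel_edges(img, thresh=0.5):
    """Binary edge map from Sobel gradient magnitude on a 2-D float image.
    A simplified stand-in for Canny edge detection."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    H, W = img.shape
    gx, gy = np.zeros((H, W)), np.zeros((H, W))
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()        # horizontal gradient
            gy[i, j] = (patch * ky).sum()        # vertical gradient
    mag = np.hypot(gx, gy)
    mag /= mag.max() if mag.max() > 0 else 1.0   # normalise to [0, 1]
    return (mag > thresh).astype(np.uint8)       # binary edge map

img = np.zeros((10, 10))
img[:, 5:] = 1.0                                 # vertical step: a toy "fracture line"
edges = sobel_edges(img)
print(edges.sum() > 0)  # True: the step edge is detected
```

In the paper's pipeline, a binary edge image like this would be paired with the original gray image as a second input channel, steering the network's attention toward the fracture region.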


Subject(s)
Deep Learning ; Fractures, Bone ; Algorithms ; Fractures, Bone/diagnostic imaging ; Humans ; Machine Learning ; Neural Networks, Computer