Results 1 - 16 of 16

1.
Diagnostics (Basel) ; 13(24)2023 Dec 14.
Article in English | MEDLINE | ID: mdl-38132255

ABSTRACT

The medical field is experiencing remarkable advancements, notably through the latest technologies, such as artificial intelligence (AI), big data, high-performance computing (HPC), and high-throughput computing (HTC), which are in place to offer groundbreaking solutions to support medical professionals in the diagnostic process [...].

2.
Data Brief ; 50: 109524, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37732295

ABSTRACT

A dataset of fully labeled images of 20 different kinds of fruits has been developed for research in the detection, recognition, and classification of fruits. Applications range from fruit recognition to calorie estimation and other innovative uses. The dataset gives researchers the opportunity to develop automatic systems for the detection and recognition of fruit images using deep learning, computer vision, and machine learning algorithms. The main contribution is a very large, fully labeled dataset that is publicly accessible and available to all researchers free of charge. The dataset, called "DeepFruit", consists of 21,122 fruit images covering 8 different fruit set combinations, where each image contains a different combination of four or five fruits. The images were captured on plates of different sizes, shapes, and colors, with varying angles, brightness levels, and distances, and they can be cleaned using preprocessing techniques that allow for noise removal, centering of the image, and so on. Preprocessing such as image rotation and cropping and scale normalization was applied to make the images uniform. The dataset is randomly partitioned into an 80% training set (16,899 images) and a 20% testing set (4,223 images). The dataset, along with the labels, is publicly accessible at: https://data.mendeley.com/datasets/5prc54r4rt.
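
The following is a minimal, hypothetical sketch of how the published 80/20 split could be reproduced once the images are downloaded, assuming a class-per-folder layout; the folder name "deepfruit/", the image size, and the batch size are assumptions, not details from the dataset description.

```python
# Hypothetical sketch: load the DeepFruit images from a class-per-folder layout
# and reproduce an 80/20 train/test split with a fixed seed.
import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed input resolution
SEED = 42               # fixed seed so the random split is reproducible

train_ds = tf.keras.utils.image_dataset_from_directory(
    "deepfruit/",              # assumed root folder, one subfolder per fruit-set class
    validation_split=0.2,      # 20% held out, matching the 16,899 / 4,223 split
    subset="training",
    seed=SEED,
    image_size=IMG_SIZE,
    batch_size=32,
)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "deepfruit/",
    validation_split=0.2,
    subset="validation",
    seed=SEED,
    image_size=IMG_SIZE,
    batch_size=32,
)
print(train_ds.class_names)    # the labeled fruit-set combinations
```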

3.
Diagnostics (Basel) ; 13(8)2023 Apr 10.
Article in English | MEDLINE | ID: mdl-37189481

ABSTRACT

One of the most common and challenging medical conditions affecting elderly people is knee osteoarthritis (KOA). Manual diagnosis of this disease involves inspecting X-ray images of the knee area and classifying them into five grades using the Kellgren-Lawrence (KL) system. This requires the physician's expertise and suitable experience, takes a lot of time, and even then the diagnosis can be prone to error. Therefore, researchers in the ML/DL domain have employed deep neural network (DNN) models to identify and classify KOA images in an automated, faster, and more accurate manner. To this end, we apply six pretrained DNN models, namely VGG16, VGG19, ResNet101, MobileNetV2, InceptionResNetV2, and DenseNet121, to KOA diagnosis using images obtained from the Osteoarthritis Initiative (OAI) dataset. More specifically, we perform two types of classification: a binary classification, which detects the presence or absence of KOA, and a three-class classification of KOA severity. For a comparative analysis, we experiment on three datasets (Dataset I, Dataset II, and Dataset III) with five, two, and three classes of KOA images, respectively. We achieved maximum classification accuracies of 69%, 83%, and 89%, respectively, with the ResNet101 DNN model. Our results show an improvement over the existing work in the literature.
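
As a rough illustration of the transfer-learning setup summarized above, the sketch below builds a binary KOA classifier on top of one pretrained backbone (ResNet101 stands in for any of the six models compared); the input size, classification head, and optimizer settings are assumptions rather than the paper's exact configuration.

```python
# Minimal transfer-learning sketch for binary KOA detection with a pretrained
# backbone; hyperparameters are assumptions, not the paper's values.
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.ResNet101(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # keep the pretrained convolutional features frozen

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # KOA present vs. absent
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # knee X-ray datasets as in the OAI setup
```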

4.
Med Biol Eng Comput ; 61(1): 45-59, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36323980

ABSTRACT

Early detection and diagnosis of brain tumors are essential for timely intervention and, ultimately, successful treatment plans leading to either a full recovery or an increase in the patient's lifespan. However, diagnosing brain tumors is not an easy task, since it requires highly skilled professionals, making the procedure both costly and time-consuming. Diagnosis from MR images becomes even harder when the images contain objects that are similar in density, size, and shape. No matter how skilled professionals are, their task is still prone to human error. The main aim of this work is to propose a system that can automatically classify and diagnose glioma brain tumors into one of four tumor types: (1) necrosis, (2) edema, (3) enhancing, and (4) non-enhancing. In this paper, we propose combining texture features based on the discrete wavelet transform (DWT) with first- and second-order statistical features for the accurate classification and diagnosis of multiclass glioma tumors. Four well-known classifiers, namely support vector machines (SVM), random forest (RF), multilayer perceptron (MLP), and naïve Bayes (NB), are used for classification. The BraTS 2018 dataset is used for the experiments, and with the combined DWT and statistical features, the RF classifier achieved the highest average accuracy for both separate and combined modalities. The highest average accuracies of 89.59% and 90.28% for HGG and LGG, respectively, are reported in this paper. The proposed method has also been observed to outperform similar existing methods reported in the extant literature.
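
A hedged sketch of the feature recipe described above follows: a 2-D DWT of an MR slice combined with first-order statistics and GLCM-based second-order texture measures, fed to a random forest. The wavelet, GLCM parameters, and exact feature list are assumptions, not the paper's specification.

```python
# Sketch: DWT sub-band statistics + first-order stats + GLCM texture features,
# classified with a random forest. Parameter choices are assumptions.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def slice_features(img):
    """img: 2-D uint8 array (one MR slice)."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")   # texture sub-bands
    first_order = [img.mean(), img.std(), float(np.percentile(img, 50))]
    wavelet_stats = [b.mean() for b in (cA, cH, cV, cD)] + [b.std() for b in (cA, cH, cV, cD)]
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
    second_order = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array(first_order + wavelet_stats + second_order)

# X = np.stack([slice_features(s) for s in slices]); y = labels (the four glioma classes)
# RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```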


Subject(s)
Brain Neoplasms, Glioma, Humans, Bayes Theorem, Brain Neoplasms/diagnostic imaging, Glioma/diagnostic imaging, Neural Networks, Computer, Wavelet Analysis
5.
Diagnostics (Basel) ; 12(11)2022 Nov 21.
Article in English | MEDLINE | ID: mdl-36428948

ABSTRACT

Proper segmentation of a brain tumor from an image is important for both patients and medical personnel due to the sensitivity of the human brain. Surgical intervention requires doctors to be extremely cautious and precise in targeting the required portion of the brain. Furthermore, the segmentation process is also important for multi-class tumor classification. This work contributes to three main areas of brain MR image processing for classification and segmentation: brain MR image classification, tumor region segmentation, and tumor classification. A framework named DeepTumor is presented for multistage, multiclass glioma tumor classification into four classes: edema, necrosis, enhancing, and non-enhancing. For binary brain MR image classification (tumorous and non-tumorous), two deep Convolutional Neural Network (CNN) models were proposed: a 9-layer model with a total of 217,954 trainable parameters and an improved 10-layer model with a total of 80,243 trainable parameters. In the second stage, an enhanced Fuzzy C-means (FCM)-based technique is proposed for tumor segmentation in brain MR images. In the final stage, a third enhanced CNN model with 11 hidden layers and a total of 241,624 trainable parameters is proposed for classifying the segmented tumor region into the four glioma tumor classes. The experiments are performed using the BraTS MRI dataset. The experimental results of the proposed CNN models for binary classification and multiclass tumor classification are compared with existing CNN models such as LeNet, AlexNet, and GoogleNet, as well as with the latest literature.
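
For illustration, a small binary (tumorous vs. non-tumorous) CNN in the spirit of the first DeepTumor stage is sketched below; the layer sizes are assumptions and do not reproduce the paper's exact 9-, 10-, or 11-layer architectures or their parameter counts.

```python
# Illustrative compact CNN for binary brain MR classification; sizes are assumed.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 1)),        # assumed single-channel MR slice
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # tumorous vs. non-tumorous
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()  # prints the trainable-parameter count for comparison
```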

6.
Plants (Basel) ; 11(17)2022 Aug 28.
Article in English | MEDLINE | ID: mdl-36079612

ABSTRACT

Rice is considered one of the most important plants globally because it is a source of food for over half the world's population. Like other plants, rice is susceptible to diseases that may affect the quantity and quality of produce, sometimes resulting in the loss of 20-40% of crop production. Early detection of these diseases can positively affect the harvest, but it requires farmers to be knowledgeable about the various diseases and how to identify them visually. Even then, it is an impossible task for farmers to survey vast farmlands on a daily basis, and even if it were possible, it would be a costly task that would, in turn, increase the price of rice for consumers. Machine learning algorithms fitted to drone technology combined with the Internet of Things (IoT) can offer a solution to this problem. In this paper, we propose a Deep Convolutional Neural Network (DCNN) transfer-learning-based approach for the accurate detection and classification of rice leaf disease, built on a modified VGG19-based transfer learning method. The proposed system can accurately detect and diagnose six distinct classes: healthy, narrow brown spot, leaf scald, leaf blast, brown spot, and bacterial leaf blight. The highest average accuracy is 96.08%, obtained using the non-normalized augmented dataset, with corresponding precision, recall, specificity, and F1-score of 0.9620, 0.9617, 0.9921, and 0.9616, respectively. The proposed approach achieved significantly better results than similar approaches using the same dataset or similar-size datasets reported in the extant literature.
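
The sketch below shows how the reported metrics (accuracy, macro-averaged precision, recall, specificity, and F1-score) can be computed from predictions on the six rice-leaf classes; the helper name and the macro-averaging choice are assumptions, not the paper's exact evaluation code.

```python
# Sketch: multi-class metrics from predictions, including per-class specificity.
import numpy as np
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

def report(y_true, y_pred, n_classes=6):
    """y_true, y_pred: integer labels for the six rice-leaf classes."""
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0
    )
    # specificity per class: TN / (TN + FP), then macro-averaged
    fp = cm.sum(axis=0) - np.diag(cm)
    tn = cm.sum() - (cm.sum(axis=1) + fp)
    specificity = float(np.mean(tn / (tn + fp)))
    accuracy = np.trace(cm) / cm.sum()
    return accuracy, precision, recall, specificity, f1
```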

7.
Viruses ; 14(8)2022 07 28.
Article in English | MEDLINE | ID: mdl-36016288

ABSTRACT

COVID-19, which was declared a pandemic on 11 March 2020, is still infecting millions to date, as the vaccines that have been developed do not prevent the disease but rather reduce the severity of the symptoms. Until a vaccine that can prevent COVID-19 infection is developed, the testing of individuals will remain a continuous process. Medical personnel monitor and treat all health conditions; hence, the time-consuming process of monitoring and testing all individuals for COVID-19 becomes an impossible task, especially as COVID-19 shares similar symptoms with the common cold and pneumonia. Some over-the-counter tests have been developed and sold, but they are unreliable and add an additional burden, because false-positive cases have to visit hospitals and undergo specialized diagnostic tests to confirm the diagnosis. Therefore, systems that can detect and diagnose COVID-19 automatically, without human intervention, remain an urgent priority, and will remain so because the same technology can be used for future pandemics and other health conditions. In this paper, we propose a modified machine learning (ML) process that integrates deep learning (DL) algorithms for feature extraction with well-known classifiers to accurately detect and diagnose COVID-19 from chest CT scans. Publicly available datasets from the China Consortium for Chest CT Image Investigation (CC-CCII) were used. The highest average accuracy obtained was 99.9%, achieved with the modified ML process when 2000 features were extracted using GoogleNet and ResNet18 and classified with a support vector machine (SVM). The results obtained using the modified ML process were higher than those of similar methods reported in the extant literature using the same or similarly sized datasets; thus, this study adds value to the current body of knowledge. Further research in this field is required to develop methods that can be applied in hospitals and better equip mankind for any future pandemics.
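
A hedged sketch of the general extract-then-classify idea follows: a pretrained backbone produces one fixed-length feature vector per CT slice, and an SVM is trained on those vectors. ResNet50 stands in here for the GoogleNet/ResNet18 backbones named above, and the preprocessing and SVM settings are assumptions.

```python
# Sketch: deep features from a pretrained backbone + SVM classifier.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

backbone = tf.keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array (N, 224, 224, 3) of CT slices replicated to 3 channels."""
    x = tf.keras.applications.resnet50.preprocess_input(images.copy())
    return backbone.predict(x, verbose=0)          # (N, 2048) deep feature vectors

# X_train = extract_features(train_images); X_test = extract_features(test_images)
# clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)
```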


Subject(s)
COVID-19, Deep Learning, Pneumonia, COVID-19/diagnostic imaging, Humans, Pneumonia/diagnostic imaging, SARS-CoV-2, Tomography, X-Ray Computed/methods
8.
Diagnostics (Basel) ; 12(7)2022 Jul 01.
Article in English | MEDLINE | ID: mdl-35885512

ABSTRACT

Diabetic Retinopathy (DR) is a medical condition present in patients suffering from long-term diabetes. If it is not diagnosed at an early stage, it can lead to vision impairment. High blood sugar in diabetic patients is the main cause of DR, affecting the blood vessels within the retina. Manual detection of DR is a difficult task, since the disease causes structural changes in the retina such as Microaneurysms (MAs), Exudates (EXs), Hemorrhages (HMs), and extra blood vessel growth. In this work, a hybrid technique for the detection and classification of Diabetic Retinopathy in fundus images of the eye is proposed. Transfer learning (TL) is applied to pre-trained Convolutional Neural Network (CNN) models to extract features, which are combined to generate a hybrid feature vector. This feature vector is passed to various classifiers for binary and multiclass classification of fundus images. System performance is measured using various metrics, and the results are compared with recent approaches for DR detection. The proposed method provides a significant performance improvement in DR detection for fundus images, achieving the highest accuracy of 97.8% for binary classification and 89.29% for multiclass classification.
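
The hybrid-feature idea can be illustrated as follows: pooled features from two pretrained backbones are concatenated into a single vector per fundus image before classification. The backbone pair (VGG16 + DenseNet121), input size, and pooling are assumptions and are not necessarily the models used in the paper.

```python
# Sketch: concatenating features from two pretrained CNNs into a hybrid vector.
import numpy as np
import tensorflow as tf

vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=False, pooling="avg")
dense = tf.keras.applications.DenseNet121(weights="imagenet", include_top=False, pooling="avg")

def hybrid_features(images):
    """images: (N, 224, 224, 3) RGB fundus images with values in [0, 255]."""
    f1 = vgg.predict(tf.keras.applications.vgg16.preprocess_input(images.copy()), verbose=0)
    f2 = dense.predict(tf.keras.applications.densenet.preprocess_input(images.copy()), verbose=0)
    return np.concatenate([f1, f2], axis=1)   # hybrid feature vector, shape (N, 512 + 1024)

# The hybrid vectors can then be passed to any classifier (SVM, random forest, ...)
# for binary (DR / no DR) or multiclass severity grading.
```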

9.
PeerJ Comput Sci ; 8: e955, 2022.
Article in English | MEDLINE | ID: mdl-35494816

ABSTRACT

Author verification of handwritten text is required in several application domains and has drawn a lot of attention within the research community due to its importance. Although several approaches have been proposed for text-independent writer verification of handwritten text, none of them has addressed the problem domain where author verification is sought based on partially damaged handwritten documents (e.g., during forensic analysis). In this paper, we propose an approach for offline text-independent writer verification of handwritten Arabic text based on individual character shapes (within the Arabic alphabet). The proposed approach enables writer verification for partially damaged documents where certain handwritten characters can still be extracted from the damaged document. We also provide a mechanism to identify which Arabic characters are more effective during the writer verification process. We have collected a new dataset, Arabic Handwritten Alphabet, Words and Paragraphs Per User (AHAWP), for this purpose in a classroom setting with 82 different users. The dataset consists of 53,199 user-written isolated Arabic characters, 8,144 Arabic words, and 10,780 characters extracted from these words. Convolutional neural network (CNN) based models are developed for writer verification based on individual characters, with an accuracy of 94% for isolated character shapes and 90% for extracted character shapes. The proposed approach provided up to 95% writer verification accuracy for partially damaged documents.

10.
Diagnostics (Basel) ; 12(4)2022 Apr 18.
Article in English | MEDLINE | ID: mdl-35454066

ABSTRACT

The complexity of brain tissue requires skillful technicians and expert medical doctors to manually analyze and diagnose glioma brain tumors from multiple Magnetic Resonance (MR) images with multiple modalities. Unfortunately, manual diagnosis is a lengthy and costly process. With this type of cancerous disease, early detection increases the chances of suitable medical procedures leading to either a full recovery or the prolongation of the patient's life. This has increased efforts to automate the detection and diagnosis process without human intervention, allowing the detection of multiple types of tumors from MR images. This research paper proposes a multi-class glioma tumor classification technique that uses deep-learning-based features with a Support Vector Machine (SVM) classifier. A deep convolutional neural network is used to extract features from the MR images, which are then fed to an SVM classifier. With the proposed technique, an accuracy of 96.19% was achieved for the HGG glioma type using the FLAIR modality and 95.46% for the LGG glioma type using the T2 modality, for the classification of four glioma classes (edema, necrosis, enhancing, and non-enhancing). The accuracies achieved using the proposed method were higher than those reported by similar methods in the extant literature using the same BraTS dataset. In addition, the accuracy results obtained in this work are better than those achieved by the GoogleNet and LeNet pre-trained models on the same dataset.

11.
Curr Med Imaging ; 18(9): 903-918, 2022.
Article in English | MEDLINE | ID: mdl-35040408

ABSTRACT

BACKGROUND: Identifying a tumor in the brain is a complex problem that requires sophisticated skills and inference mechanisms to accurately locate the tumor region. The complex nature of brain tissue makes locating, segmenting, and ultimately classifying Magnetic Resonance (MR) images a difficult problem. The aim of this review paper is to consolidate the details of the most relevant and recent approaches proposed in this domain for the binary and multi-class classification of brain tumors using brain MR images. OBJECTIVE: In this review paper, a detailed summary of the latest techniques used for brain MR image feature extraction and classification is presented. Many research papers have recently been published proposing various techniques for the correct recognition and diagnosis of brain MR images. The review allows researchers in the field to familiarize themselves with the latest developments and to propose novel techniques that have not yet been explored in this research domain. In addition, it will help researchers who are new to machine learning algorithms for brain tumor recognition to understand the basics of the field and pave the way for them to contribute to this vital area of medical research. RESULTS: The review covers all recently proposed methods for both feature extraction and classification, and it identifies the combinations of feature extraction and classification methods that would be most efficient for the recognition and diagnosis of brain tumors from MR images. In addition, the paper presents the performance metrics, particularly the recognition accuracy, of selected research published between 2017 and 2021.


Subject(s)
Brain Neoplasms, Magnetic Resonance Imaging, Algorithms, Brain/diagnostic imaging, Brain/pathology, Brain Neoplasms/diagnostic imaging, Humans, Machine Learning, Magnetic Resonance Imaging/methods
12.
Diagnostics (Basel) ; 11(11)2021 Oct 23.
Article in English | MEDLINE | ID: mdl-34829319

ABSTRACT

It has become apparent that mankind has to learn to live with and adapt to COVID-19, especially because the vaccines developed thus far do not prevent the infection but rather reduce the severity of the symptoms. The manual classification and diagnosis of COVID-19 pneumonia requires specialized personnel and is time-consuming and very costly. Automatic diagnosis, on the other hand, would allow for real-time diagnosis without human intervention, resulting in reduced costs. Therefore, the objective of this research is to propose a novel optimized Deep Learning (DL) approach for the automatic classification and diagnosis of COVID-19 pneumonia using X-ray images. For this purpose, a publicly available chest X-ray dataset on Kaggle was used. The dataset was developed over three stages in a quest to provide a unified COVID-19 dataset for researchers and consists of 21,165 anterior-to-posterior and posterior-to-anterior chest X-ray images classified as Normal (48%), COVID-19 (17%), Lung Opacity (28%), and Viral Pneumonia (6%). Data augmentation was applied to increase the dataset size and enhance the reliability of the results by preventing overfitting. In the proposed optimized DL approach, chest X-ray images go through a three-stage process: image enhancement is performed in the first stage, followed by a data augmentation stage, and in the final stage the results are fed to transfer learning algorithms (AlexNet, GoogleNet, VGG16, VGG19, and DenseNet), where the images are classified and diagnosed. Extensive experiments were performed under various scenarios, with the highest classification accuracy of 95.63% achieved by applying the VGG16 transfer learning algorithm to the augmented, enhanced dataset with frozen weights. This accuracy is better than the results reported by other methods in the recent literature; thus, the proposed approach proved superior in performance to other similar approaches in the extant literature and makes a valuable contribution to the body of knowledge. Although the results achieved so far are promising, further work is planned to correlate the results of the proposed approach with clinical observations to further enhance the efficiency and accuracy of COVID-19 diagnosis.
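
The sketch below illustrates the first two stages of the pipeline described above (image enhancement followed by augmentation); CLAHE as the enhancement step and the specific augmentation ranges are assumptions rather than the paper's exact settings.

```python
# Sketch: stage 1 (contrast enhancement) and stage 2 (augmentation) before
# transfer learning; settings are assumed, not the paper's configuration.
import cv2
import tensorflow as tf
from tensorflow.keras import layers

def enhance(xray_u8):
    """Stage 1: contrast enhancement of a grayscale chest X-ray (uint8 array)."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(xray_u8)

# Stage 2: augmentation layers applied on the fly during training
augment = tf.keras.Sequential([
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
    layers.RandomTranslation(0.05, 0.05),
])

# Stage 3 then fine-tunes a pretrained backbone (e.g., VGG16 with frozen weights)
# on the enhanced, augmented images, as in the transfer-learning sketches above.
```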

13.
Curr Med Imaging ; 17(8): 917-930, 2021.
Article in English | MEDLINE | ID: mdl-33397241

ABSTRACT

BACKGROUND: Image segmentation techniques, particularly those used for brain MRI segmentation, vary in complexity from the basic standard Fuzzy C-means (FCM) algorithm to more complex and enhanced FCM techniques. OBJECTIVE: In this paper, a comprehensive review of thirteen variations of FCM segmentation techniques is presented, concentrating on their use for brain tumors. Brain tumor segmentation is a vital step in the process of automatically diagnosing brain tumors, and unlike the segmentation of other types of images, it is a very challenging task due to variations in brain anatomy; the low contrast of brain images further complicates the process. Early diagnosis of brain tumors is beneficial to patients, doctors, and medical providers. RESULTS: FCM segmentation works on images obtained from magnetic resonance imaging (MRI) scanners and therefore requires only minor modifications to hospital operations to support early tumor diagnosis, as most, if not all, hospitals rely on MRI machines for brain imaging. CONCLUSION: In this paper, we critically review and summarize FCM-based techniques for brain MRI segmentation.
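
For readers unfamiliar with the baseline, a minimal standard FCM iteration on voxel intensities is sketched below; the number of clusters, the fuzzifier m, and the stopping rule are assumptions, and the enhanced variants reviewed in the paper modify this basic update.

```python
# Minimal standard Fuzzy C-means on a 1-D intensity feature; parameters assumed.
import numpy as np

def fcm(intensities, n_clusters=4, m=2.0, n_iter=100, eps=1e-5, seed=0):
    """intensities: 1-D array of voxel intensities (a flattened brain MR volume)."""
    x = intensities.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                      # memberships sum to 1 per voxel
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]     # weighted cluster centroids
        d = np.abs(x - centers.T) + 1e-12                  # distance of each voxel to each center
        p = 2.0 / (m - 1.0)
        u_new = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
        if np.max(np.abs(u_new - u)) < eps:
            u = u_new
            break
        u = u_new
    return u.argmax(axis=1), centers.ravel()               # hard labels and cluster centers
```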


Subject(s)
Fuzzy Logic, Image Processing, Computer-Assisted, Algorithms, Brain/diagnostic imaging, Humans, Magnetic Resonance Imaging
14.
Curr Med Imaging ; 17(1): 56-63, 2021.
Article in English | MEDLINE | ID: mdl-32160848

ABSTRACT

BACKGROUND: Detection of brain tumors is a complicated task that requires specialized skills and interpretation techniques. Accurate brain tumor classification and segmentation from MR images provide an essential basis for medical treatment. Different objects within an MR image can have similar size, shape, and density, which makes tumor classification and segmentation even more complex. OBJECTIVE: To classify brain MR images into tumorous and non-tumorous using deep features and different classifiers in order to achieve higher accuracy. METHODS: In this study, a novel four-step process is proposed: pre-processing for image enhancement and compression, feature extraction using convolutional neural networks (CNN), classification using a multilayer perceptron, and finally, tumor segmentation using an enhanced fuzzy c-means method. RESULTS: The system is tested on 65 cases in four modalities, consisting of 40,300 MR images obtained from the BRATS-2015 dataset. These include images of 26 Low-Grade Glioma (LGG) tumor cases and 39 High-Grade Glioma (HGG) tumor cases. The proposed CNN feature-based classification technique outperforms existing methods by achieving an average accuracy of 98.77%, and a noticeable improvement in the segmentation results is also measured. CONCLUSION: The proposed method for brain MR image classification for glioma tumor detection can be adopted, as it gives better results with high accuracy.
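
As a small illustration of the classification step described in METHODS, the sketch below trains a multilayer perceptron on pre-extracted CNN feature vectors; the hidden-layer sizes and the train/test split are assumptions.

```python
# Sketch: multilayer perceptron on CNN feature vectors; settings assumed.
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# X: (N, D) CNN feature vectors, y: 0 = non-tumorous, 1 = tumorous
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
# clf.fit(X_tr, y_tr); print(clf.score(X_te, y_te))
```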


Subject(s)
Glioma, Image Processing, Computer-Assisted, Brain, Glioma/diagnostic imaging, Humans, Magnetic Resonance Imaging, Neural Networks, Computer
15.
Data Brief ; 23: 103777, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31372425

ABSTRACT

A fully labelled dataset of Arabic Sign Language (ArSL) images has been developed for research related to sign language recognition. The dataset gives researchers the opportunity to investigate and develop automated systems for deaf and hard-of-hearing people using machine learning, computer vision, and deep learning algorithms. The contribution is a large, fully labelled dataset for Arabic Sign Language (ArSL) which is made publicly available and free for all researchers. The dataset, named ArSL2018, consists of 54,049 images of the 32 Arabic sign language signs and alphabets, collected from 40 participants in different age groups. The images vary in dimensions and contain variations that can be cleaned using pre-processing techniques to remove noise, center the image, etc. The dataset is made publicly available at https://data.mendeley.com/datasets/y7pckrw6z2/1.

16.
Curr Med Imaging Rev ; 15(7): 679-688, 2019.
Article in English | MEDLINE | ID: mdl-32008516

ABSTRACT

BACKGROUND: An approach based on QR decomposition for removing speckle noise from medical ultrasound images is presented in this paper. METHODS: The speckle-noisy image is segmented into small overlapping blocks, and a global covariance matrix is calculated by averaging the corresponding covariances of the blocks. QR decomposition is applied to the global covariance matrix, and to filter out speckle noise, the data is projected onto the signal subspace spanned by the first subset of orthogonal vectors of the Q matrix. The proposed approach is compared with five benchmark techniques: Homomorphic Wavelet Despeckling (HWDS), Speckle Reducing Anisotropic Diffusion (SRAD), Frost, Kuan, and Probabilistic Non-Local Mean (PNLM). RESULTS AND CONCLUSION: When applied to different simulated and real ultrasound images, the QR-based approach secured the best despeckling performance while maintaining optimal resolution and edge detection, regardless of image size or the nature of the speckle (fine or rough).
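
A numerical sketch in the spirit of the approach above follows: overlapping blocks, a single covariance estimate, QR decomposition, and projection of each block onto the leading columns of Q. The block size, overlap, signal-subspace rank, and the way the covariance is estimated (across blocks rather than averaged per block) are simplifying assumptions, not the paper's exact procedure.

```python
# Sketch: subspace-projection despeckling via QR of a block covariance matrix.
import numpy as np

def qr_despeckle(img, block=8, step=4, rank=3):
    """img: 2-D float array (one ultrasound frame)."""
    h, w = img.shape
    patches, coords = [], []
    for i in range(0, h - block + 1, step):            # overlapping blocks
        for j in range(0, w - block + 1, step):
            patches.append(img[i:i + block, j:j + block].ravel())
            coords.append((i, j))
    B = np.asarray(patches, dtype=float)
    mu = B.mean(axis=0)
    Bc = B - mu
    cov = (Bc.T @ Bc) / len(Bc)                        # global covariance estimate
    Q, _ = np.linalg.qr(cov)                           # QR decomposition
    P = Q[:, :rank] @ Q[:, :rank].T                    # projector onto the signal subspace
    filtered = Bc @ P + mu                             # keep only the signal components
    out = np.zeros((h, w))
    weight = np.zeros((h, w))
    for (i, j), p in zip(coords, filtered):            # average overlapping blocks back
        out[i:i + block, j:j + block] += p.reshape(block, block)
        weight[i:i + block, j:j + block] += 1
    return out / np.maximum(weight, 1)
```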


Subject(s)
Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Ultrasonography/methods, Algorithms, Heart/diagnostic imaging, Humans, Kidney/diagnostic imaging, Lymph Nodes/diagnostic imaging, Signal-To-Noise Ratio