Results 1 - 20 of 58
1.
medRxiv; 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38853875

ABSTRACT

The left supramarginal gyrus (LSMG) may mediate attention to memory, and gauge memory state and performance. We performed a secondary analysis of 142 verbal delayed free recall experiments in patients with medically refractory epilepsy who had electrode contacts implanted in the LSMG. In 14 of 142 experiments (in 14 of 113 patients), cross-validated convolutional neural networks (CNNs) that used one-dimensional (1-D) pairs of convolved high-gamma and beta tensors, derived from the LSMG recordings, could label recalled words with an area under the receiver operating characteristic curve (AUROC) of greater than 60% [range: 60-90%]. These 14 patients were distinguished by: 1) higher amplitudes of high-gamma bursts; 2) distinct electrode placement within the LSMG; and 3) superior performance compared with a CNN that used a 1-D tensor of the broadband LSMG recordings. In a pilot study of 7 of these patients, we also cross-validated CNNs using paired 1-D convolved high-gamma and beta tensors from the LSMG to: a) distinguish word encoding epochs from free recall epochs [AUC 0.6-1]; and b) distinguish better performance from poor performance during delayed free recall [AUC 0.5-0.86]. These experiments show that bursts of high-gamma and beta generated in the LSMG are biomarkers of verbal memory state and performance.
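
A minimal sketch, not the authors' code, of the kind of classifier the abstract describes: a 1-D CNN whose two input channels carry the convolved high-gamma and beta band-power traces for one word epoch and whose output is the probability of later recall, scored with cross-validated AUROC. Layer sizes, window length, and the use of PyTorch are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RecallCNN(nn.Module):
    def __init__(self, n_samples=500):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels=2, out_channels=16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):            # x: (batch, 2, n_samples) band-power pairs
        z = self.features(x).squeeze(-1)
        return self.classifier(z)    # raw logit; apply sigmoid for recall probability

# Example: a batch of 8 word epochs, 2 band-power channels, 500 samples each.
model = RecallCNN()
probs = torch.sigmoid(model(torch.randn(8, 2, 500)))  # feed to roc_auc_score for AUROC
```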

2.
Sensors (Basel); 24(12), 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931751

ABSTRACT

This work addresses the challenge of classifying multiclass visual EEG signals into 40 classes for brain-computer interface (BCI) applications using deep learning architectures. The visual multiclass classification approach offers a significant advantage for BCI applications because it allows more than one BCI interaction to be supervised, with each class label supervising a BCI task. However, because of the nonlinearity and nonstationarity of EEG signals, multiclass classification based on EEG features remains a significant challenge for BCI systems. In the present work, mutual information-based discriminant channel selection and minimum-norm estimate algorithms were implemented to select discriminant channels and enhance the EEG data. Deep EEGNet and convolutional recurrent neural networks were then implemented separately to classify the EEG data recorded during image visualization into 40 labels. Using the k-fold cross-validation approach, average classification accuracies of 94.8% and 89.8% were obtained with the two network architectures, respectively. These satisfactory results offer a new implementation opportunity for multitask embedded BCI applications utilizing a reduced number of both channels (<50%) and network parameters (<110 K).
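
A hedged sketch of the mutual-information-based discriminant channel selection step mentioned above: each channel is scored by the mutual information between a per-trial summary feature (mean amplitude here, an illustrative choice) and the 40 class labels, and only the top-scoring channels are kept before the deep network is trained. The feature choice, channel count, and synthetic data are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_channels(X, y, n_keep=32):
    """X: (n_trials, n_channels, n_samples) EEG array; y: (n_trials,) labels."""
    per_channel_feature = X.mean(axis=2)            # (n_trials, n_channels)
    mi = mutual_info_classif(per_channel_feature, y, random_state=0)
    keep = np.argsort(mi)[::-1][:n_keep]            # most informative channels first
    return X[:, keep, :], keep

# Example with synthetic data: 200 trials, 128 channels, 250 samples, 40 classes.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 128, 250))
y = rng.integers(0, 40, size=200)
X_reduced, kept = select_channels(X, y, n_keep=32)  # X_reduced would feed EEGNet / CRNN
```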


Subjects
Algorithms; Brain-Computer Interfaces; Deep Learning; Electroencephalography; Neural Networks, Computer; Electroencephalography/methods; Humans; Signal Processing, Computer-Assisted
3.
Diagnostics (Basel); 14(12), 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38928692

ABSTRACT

This paper introduces a novel one-dimensional convolutional neural network that utilizes clinical data to accurately detect choledocholithiasis, where gallstones obstruct the common bile duct. Swift and precise detection of this condition is critical to preventing severe complications, such as biliary colic, jaundice, and pancreatitis. This cutting-edge model was rigorously compared with other machine learning methods commonly used in similar problems, such as logistic regression, linear discriminant analysis, and a state-of-the-art random forest, using a dataset derived from endoscopic retrograde cholangiopancreatography scans performed at Olive View-University of California, Los Angeles Medical Center. The one-dimensional convolutional neural network model demonstrated exceptional performance, achieving 90.77% accuracy and 92.86% specificity, with an area under the curve of 0.9270. While the paper acknowledges potential areas for improvement, it emphasizes the effectiveness of the one-dimensional convolutional neural network architecture. The results suggest that this one-dimensional convolutional neural network approach could serve as a plausible alternative to endoscopic retrograde cholangiopancreatography, given the latter's disadvantages, such as the need for specialized equipment and skilled personnel and the risk of postoperative complications. The potential of the one-dimensional convolutional neural network model to significantly advance the clinical diagnosis of this gallstone-related condition is notable, offering a less invasive, potentially safer, and more accessible alternative.
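
A minimal sketch of the baseline comparison described above: logistic regression, linear discriminant analysis, and a random forest evaluated by cross-validated AUROC on tabular clinical data. The stand-in dataset and hyperparameters are assumptions; the 1-D CNN itself is omitted and only the evaluation protocol is illustrated.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in clinical dataset: 500 patients, 20 features, binary outcome.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

baselines = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "lda": LinearDiscriminantAnalysis(),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in baselines.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUROC = {auc:.3f}")
```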

4.
J Oral Pathol Med; 53(7): 415-433, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38807455

ABSTRACT

BACKGROUND: The purpose of this systematic review (SR) is to gather evidence on the use of machine learning (ML) models in the diagnosis of intraosseous lesions in gnathic bones and to analyze the reliability, impact, and usefulness of such models. This SR was performed in accordance with the PRISMA 2022 guidelines and was registered in the PROSPERO database (CRD42022379298). METHODS: The acronym PICOS was used to structure the inquiry-focused review question "Is Artificial Intelligence reliable for the diagnosis of intraosseous lesions in gnathic bones?" The literature search was conducted in various electronic databases, including PubMed, Embase, Scopus, Cochrane Library, Web of Science, Lilacs, IEEE Xplore, and Gray Literature (Google Scholar and ProQuest). Risk of bias assessment was performed using PROBAST, and the results were synthesized by considering the task and sampling strategy of the dataset. RESULTS: Twenty-six studies were included (21 146 radiographic images). Ameloblastomas, odontogenic keratocysts, dentigerous cysts, and periapical cysts were the most frequently investigated lesions. According to TRIPOD, most studies were classified as type 2 (randomly divided). The F1 score was presented in only 13 studies, which provided the metrics for 20 trials, with a mean of 0.71 (±0.25). CONCLUSION: There is no conclusive evidence to support the usefulness of ML-based models in the detection, segmentation, and classification of intraosseous lesions in gnathic bones for routine clinical application. The lack of detail about data sampling, the lack of a comprehensive set of metrics for training and validation, and the absence of external testing limit experiments and hinder proper evaluation of model performance.


Subjects
Artificial Intelligence; Radiomics; Humans; Ameloblastoma/diagnostic imaging; Ameloblastoma/pathology; Dentigerous Cyst/diagnostic imaging; Jaw Diseases/diagnostic imaging; Machine Learning; Odontogenic Cysts/diagnostic imaging; Odontogenic Cysts/pathology; Reproducibility of Results
5.
Biomed Phys Eng Express; 10(3), 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38631317

ABSTRACT

Introduction. Currently available dosimetry techniques in computed tomography can be inaccurate and overestimate the absorbed dose. Therefore, we aimed to provide an automated and fast methodology to more accurately calculate the size-specific dose estimate (SSDE) using the water-equivalent diameter (Dw) obtained with a CNN from thorax and abdominal CT study images. Methods. The SSDE was determined from 200 record files. For that purpose, patient size was measured in two ways: (a) by developing an algorithm following the AAPM Report No. 204 methodology; and (b) using a CNN according to AAPM Report No. 220. Results. The patient size measured by the in-house software in the thorax and abdomen regions was 27.63 ± 3.23 cm and 28.66 ± 3.37 cm, while that measured by the CNN was 18.90 ± 2.6 cm and 21.77 ± 2.45 cm, respectively. The SSDE in the thorax according to Reports 204 and 220 was 17.26 ± 2.81 mGy and 23.70 ± 2.96 mGy for women and 17.08 ± 2.09 mGy and 23.47 ± 2.34 mGy for men; in the abdomen it was 18.54 ± 2.25 mGy and 23.40 ± 1.88 mGy in women and 18.37 ± 2.31 mGy and 23.84 ± 2.36 mGy in men. Conclusions. Implementing CNN-based automated methodologies can contribute to fast and accurate dose calculations, thereby improving patient-specific radiation safety in clinical practice.
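
A small illustrative calculation of the SSDE step the abstract relies on, following the general form of AAPM Reports 204/220: SSDE = f(Dw) × CTDIvol, with an exponential size-dependent conversion factor. The coefficients below are the Report 204 fit for the 32 cm phantom as recalled here and should be verified against the report; the example CTDIvol is made up.

```python
import math

def ssde_from_dw(ctdi_vol_mgy: float, dw_cm: float,
                 a: float = 3.704369, b: float = 0.03671937) -> float:
    """Size-specific dose estimate (mGy) from CTDIvol (mGy) and water-equivalent diameter (cm)."""
    f_size = a * math.exp(-b * dw_cm)   # conversion factor, 32 cm phantom (assumed coefficients)
    return f_size * ctdi_vol_mgy

# Example: CTDIvol of 10 mGy and a CNN-estimated Dw of 21.8 cm (the mean abdominal
# value reported above) gives an SSDE of roughly 16.6 mGy.
print(round(ssde_from_dw(10.0, 21.8), 1))
```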


Subjects
Algorithms; Radiation Dosage; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Male; Female; Body Size; Neural Networks, Computer; Software; Automation; Thorax/diagnostic imaging; Adult; Abdomen/diagnostic imaging; Radiometry/methods; Radiography, Thoracic/methods; Middle Aged; Image Processing, Computer-Assisted/methods; Radiography, Abdominal/methods; Aged
6.
PeerJ; 11: e16219, 2023.
Article in English | MEDLINE | ID: mdl-37953792

ABSTRACT

Corals are colonial animals within the Phylum Cnidaria that form coral reefs, playing a significant role in marine environments by providing habitat for fish, mollusks, crustaceans, sponges, algae, and other organisms. Global climate changes are causing more intense and frequent thermal stress events, leading to corals losing their color due to the disruption of a symbiotic relationship with photosynthetic endosymbionts. Given the importance of corals to the marine environment, monitoring coral reefs is critical to understanding their response to anthropogenic impacts. Most coral monitoring activities involve underwater photographs, which can be costly to generate on large spatial scales and require processing and analysis that may be time-consuming. The Marine Ecology Laboratory (LECOM) at the Federal University of Rio Grande do Norte (UFRN) developed the project "#DeOlhoNosCorais" which encourages users to post photos of coral reefs on their social media (Instagram) using this hashtag, enabling people without previous scientific training to contribute to coral monitoring. The laboratory team identifies the species and gathers information on coral health along the Brazilian coast by analyzing each picture posted on social media. To optimize this process, we conducted baseline experiments for image classification and semantic segmentation. We analyzed the classification results of three different machine learning models using the Local Interpretable Model-agnostic Explanations (LIME) algorithm. The best results were achieved by combining EfficientNet for feature extraction and Logistic Regression for classification. Regarding semantic segmentation, the U-Net Pix2Pix model produced a pixel-level accuracy of 86%. Our results indicate that this tool can enhance image selection for coral monitoring purposes and open several perspectives for improving classification performance. Furthermore, our findings can be expanded by incorporating other datasets to create a tool that streamlines the time and cost associated with analyzing coral reef images across various regions.
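
A hedged sketch of the best-performing classification pipeline reported above: an ImageNet-pretrained EfficientNet used as a frozen feature extractor, followed by a scikit-learn logistic regression classifier. The EfficientNet variant (B0), image size, binary labels, and random stand-in data are assumptions, not details from the project.

```python
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

extractor = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg")   # 1280-d feature vector per image

def embed(images_uint8):
    """images_uint8: (n, 224, 224, 3) array of RGB reef photos."""
    x = tf.keras.applications.efficientnet.preprocess_input(images_uint8.astype("float32"))
    return extractor.predict(x, verbose=0)

# Stand-in data: 32 images with binary labels (e.g., usable coral photo vs. not).
images = np.random.randint(0, 256, size=(32, 224, 224, 3), dtype="uint8")
labels = np.random.randint(0, 2, size=32)
clf = LogisticRegression(max_iter=1000).fit(embed(images), labels)
```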


Subjects
Anthozoa; Humans; Animals; Anthozoa/physiology; Coral Reefs; Ecosystem; Crustacea; Fishes
7.
J Forensic Sci; 68(6): 2057-2064, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37746788

ABSTRACT

The objective of this study is to assess the performance of an innovative AI-powered tool for sex determination using panoramic radiographs (PR) and to explore factors affecting the performance of the convolutional neural network (CNN). The study involved 207,946 panoramic dental X-rays and their corresponding reports from 15 clinical centers in São Paulo, Brazil. The PRs were acquired with four different devices, and 58% of the patients were female. Data preprocessing included anonymizing the exams, extracting pertinent information from the reports, such as sex, age, type of dentition, and number of missing teeth, and organizing the data into a PostgreSQL database. Two neural network architectures, a standard CNN and a ResNet, were utilized for sex classification, with both undergoing hyperparameter tuning and cross-validation to ensure optimal performance. The CNN model achieved 95.02% accuracy in sex estimation, with image resolution being a significant influencing factor. The ResNet model attained over 86% accuracy in subjects older than 6 years and over 96% in those over 16 years. The algorithm performed better on female images, and the area under the curve (AUC) exceeded 96% for most age groups, except the youngest. Accuracy values were also assessed for different dentition types (deciduous, mixed, and permanent) and missing teeth. This study demonstrates the effectiveness of an AI-driven tool for sex determination using PR and emphasizes the role of image resolution, age, and sex in determining the algorithm's performance.


Subjects
Deep Learning; Humans; Female; Male; Radiography, Panoramic; Brazil; Neural Networks, Computer; Algorithms
8.
Front Plant Sci; 14: 1211490, 2023.
Article in English | MEDLINE | ID: mdl-37767291

ABSTRACT

The limited availability of information on Chilean native flora has resulted in a lack of knowledge among the general public, and the classification of these plants poses challenges without extensive expertise. This study evaluates the performance of several Deep Learning (DL) models, namely InceptionV3, VGG19, ResNet152, and MobileNetV2, in classifying images representing Chilean native flora. The models are pre-trained on Imagenet. A dataset containing 500 images for each of the 10 classes of native flowers in Chile was curated, resulting in a total of 5000 images. The DL models were applied to this dataset, and their performance was compared based on accuracy and other relevant metrics. The findings highlight the potential of DL models to accurately classify images of Chilean native flora. The results contribute to enhancing the understanding of these plant species and fostering awareness among the general public. Further improvements and applications of DL in ecology and biodiversity research are discussed.
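
A minimal sketch of the transfer-learning setup described above: an ImageNet-pretrained backbone (MobileNetV2, one of the four architectures compared) frozen and topped with a new 10-class softmax head for the native-flora images. Input size, optimizer, dropout rate, and head size are illustrative assumptions.

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                                 # keep the pretrained weights frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),   # 10 native-flower classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # image datasets not shown here
```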

9.
Radiol Artif Intell; 5(4): e220158, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37529207

ABSTRACT

Scoliosis is a disease estimated to affect more than 8% of adults in the United States. It is diagnosed with use of radiography by means of manual measurement of the angle between maximally tilted vertebrae on a radiograph (ie, the Cobb angle). However, these measurements are time-consuming, limiting their use in scoliosis surgical planning and postoperative monitoring. In this retrospective study, a pipeline (using the SpineTK architecture) was developed that was trained, validated, and tested on 1310 anterior-posterior images obtained with a low-dose stereoradiographic scanning system and radiographs obtained in patients with suspected scoliosis to automatically measure Cobb angles. The images were obtained at six centers (2005-2020). The algorithm measured Cobb angles on hold-out internal (n = 460) and external (n = 161) test sets with less than 2° error (intraclass correlation coefficient, 0.96) compared with ground truth measurements by two experienced radiologists. Measurements, produced in less than 0.5 second, did not differ significantly (P = .05 cutoff) from ground truth measurements, regardless of the presence or absence of surgical hardware (P = .80), age (P = .58), sex (P = .83), body mass index (P = .63), scoliosis severity (P = .44), or image type (low-dose stereoradiographic image vs radiograph; P = .51) in the patient. These findings suggest that the algorithm is highly robust across different clinical characteristics. Given its automated, rapid, and accurate measurements, this network may be used for monitoring scoliosis progression in patients. Keywords: Cobb Angle, Convolutional Neural Network, Deep Learning Algorithms, Pediatrics, Machine Learning Algorithms, Scoliosis, Spine Supplemental material is available for this article. © RSNA, 2023.
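
A small worked example of the geometric step behind automated Cobb-angle measurement: once a model has located the endplates of the two maximally tilted vertebrae, the Cobb angle is simply the angle between those endplate lines. The landmark coordinates below are made up purely for illustration and are not from the SpineTK pipeline.

```python
import numpy as np

def cobb_angle(endplate_a, endplate_b):
    """Each endplate is ((x1, y1), (x2, y2)) in image coordinates; returns degrees."""
    va = np.subtract(endplate_a[1], endplate_a[0]).astype(float)
    vb = np.subtract(endplate_b[1], endplate_b[0]).astype(float)
    cos = abs(np.dot(va, vb)) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Superior endplate tilted +12 degrees and inferior endplate tilted -18 degrees
# from horizontal gives a Cobb angle of about 30 degrees.
upper = ((0.0, 0.0), (100.0, np.tan(np.radians(12)) * 100.0))
lower = ((0.0, 0.0), (100.0, -np.tan(np.radians(18)) * 100.0))
print(round(cobb_angle(upper, lower), 1))   # ~30.0
```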

10.
Healthcare (Basel); 11(16), 2023 Aug 14.
Article in English | MEDLINE | ID: mdl-37628493

ABSTRACT

In Mexico, according to data from the General Directorate of Health Information (2018), there is an annual incidence of 689 newborns with Trisomy 21, well known as Down Syndrome. Worldwide, this incidence is estimated at approximately 1 in every 1000 newborns. That is why this work focuses on the detection and analysis of facial emotions in children with Down Syndrome in order to predict their emotions throughout a dolphin-assisted therapy. In this work, two databases are used: Exploratory Data Analysis, with a total of 20,214 images, and the Down's Syndrome Dataset database, with 1445 images for training, validation, and testing of the neural network models. Two architectures based on a deep convolutional neural network achieve an efficiency of 79% when tested with a large reference image database. The architecture that achieves better results is then trained, validated, and tested on a small image database of facial emotions of children with Down Syndrome, obtaining an efficiency of 72%. However, this increases by 9% when the brain activity of the child is included in the training, resulting in an average precision of 81%. Using electroencephalogram (EEG) signals in a convolutional neural network (CNN) along with the Down's Syndrome Dataset (DSDS) has promising advantages in the field of brain-computer interfaces. EEG provides direct access to the electrical activity of the brain, allowing for real-time monitoring and analysis of cognitive states, and integrating EEG signals into a CNN architecture can enhance learning and decision-making capabilities. It is important to note that this work has the primary objective of addressing a doubly vulnerable population, as these children also have a disability.

11.
Sensors (Basel); 23(14), 2023 Jul 13.
Article in English | MEDLINE | ID: mdl-37514677

ABSTRACT

Human activity recognition substantially impacts people's day-to-day lives due to its capacity to gather vast, high-level data about human activity from wearable or stationary sensors. Multiple people and objects may appear in a video, dispersed across the frame in various places; because of this, modeling the interactions between many entities in spatial dimensions is necessary for visual reasoning in the action recognition task. The main aim of this paper is to evaluate and map the current scenario of human action recognition in red, green, and blue videos, based on deep learning models. A residual network (ResNet) and a vision transformer architecture (ViT) with a semi-supervised learning approach are evaluated. DINO (self-DIstillation with NO labels) is used to enhance the potential of the ResNet and ViT. The evaluated benchmark is the human motion database (HMDB51), which tries to better capture the richness and complexity of human actions. The results obtained for video classification with the proposed ViT are promising based on performance metrics and results from the recent literature. A bi-dimensional ViT with long short-term memory demonstrated great performance in human action recognition when applied to the HMDB51 dataset, presenting 96.7 ± 0.35% and 41.0 ± 0.27% accuracy (mean ± standard deviation) in the train and test phases, respectively.


Subjects
Deep Learning; Humans; Neural Networks, Computer; Supervised Machine Learning; Human Activities; Motion
12.
Entropy (Basel); 25(4), 2023 Mar 23.
Article in English | MEDLINE | ID: mdl-37190341

ABSTRACT

Automatic image description, also known as image captioning, aims to describe the elements included in an image and their relationships. This task involves two research fields: computer vision and natural language processing; thus, it has received much attention in computer science. In this review paper, we follow the Kitchenham review methodology to present the most relevant approaches to image description methodologies based on deep learning. We focused on works using convolutional neural networks (CNN) to extract the characteristics of images and recurrent neural networks (RNN) for automatic sentence generation. As a result, 53 research articles using the encoder-decoder approach were selected, focusing only on supervised learning. The main contributions of this systematic review are: (i) to describe the most relevant image description papers implementing an encoder-decoder approach from 2014 to 2022 and (ii) to determine the main architectures, datasets, and metrics that have been applied to image description.
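
A compact, hedged sketch of the encoder-decoder pattern the review surveys: a CNN encodes the image into a feature vector and an RNN (an LSTM here) generates the caption token by token. The backbone (ResNet-18), vocabulary size, and embedding dimensions are illustrative choices, not taken from any reviewed paper.

```python
import torch
import torch.nn as nn
from torchvision import models

class CaptionModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(weights=None)                   # CNN encoder backbone
        cnn.fc = nn.Linear(cnn.fc.in_features, embed_dim)     # project image to embedding size
        self.encoder = cnn
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        img_feat = self.encoder(images).unsqueeze(1)          # (B, 1, embed_dim)
        tokens = self.embed(captions)                         # (B, T, embed_dim)
        seq = torch.cat([img_feat, tokens], dim=1)            # image acts as the first "word"
        hidden, _ = self.decoder(seq)
        return self.out(hidden)                               # (B, T+1, vocab_size) token logits

model = CaptionModel()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 12)))
```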

13.
Oral Oncol; 140: 106386, 2023 May.
Article in English | MEDLINE | ID: mdl-37023561

ABSTRACT

INTRODUCTION: The aim of the present systematic review (SR) is to summarize Machine Learning (ML) models currently used to predict head and neck cancer (HNC) treatment-related toxicities, and to understand the impact of image biomarkers (IBMs) in prediction models (PMs). The present SR was conducted following the PRISMA 2022 guidelines and registered in the PROSPERO database (CRD42020219304). METHODS: The acronym PICOS was used to develop the focused review question (Can PMs accurately predict HNC treatment toxicities?) and the eligibility criteria. The inclusion criteria enrolled Prediction Model Studies (PMSs) with patient cohorts that were treated for HNC and developed toxicities. The electronic database search encompassed PubMed, EMBASE, Scopus, Cochrane Library, Web of Science, LILACS, and Gray Literature (Google Scholar and ProQuest). Risk of Bias (RoB) was assessed through PROBAST, and the results were synthesized based on the data format (with and without IBMs) to allow comparison. RESULTS: A total of 28 studies and 4,713 patients were included. Xerostomia was the most frequently investigated toxicity (17; 60.71%). Sixteen (57.14%) studies reported using radiomics features in combination with clinical or dosimetric/dosiomic features for modelling. High RoB was identified in 23 studies. Meta-analysis (MA) showed an area under the receiver operating characteristic curve (AUROC) of 0.82 for models with IBMs and 0.81 for models without IBMs (p value < 0.001), demonstrating no difference between IBM-based and non-IBM-based models. DISCUSSION: The development of a PM based on sample-specific features represents patient selection bias and may affect a model's performance. Heterogeneity of the studies as well as non-standardized metrics prevent proper comparison of studies, and the absence of an independent/external test does not allow evaluation of the models' generalization ability. CONCLUSION: IBM-featured PMs are not superior to PMs based on non-IBM predictors. The evidence was appraised as of low certainty.


Subjects
Head and Neck Neoplasms; Xerostomia; Humans; Head and Neck Neoplasms/drug therapy; Biomarkers; Machine Learning
14.
Life (Basel); 13(2), 2023 Jan 29.
Article in English | MEDLINE | ID: mdl-36836725

ABSTRACT

The world has been greatly affected by the COVID-19 pandemic, which has kept people isolated and decreased interaction between them. Accordingly, various measures have been taken to continue with a new normal way of life, creating a need to implement technologies and systems that decrease the spread of the virus. This research proposes a real-time system that identifies the facial region using preprocessing techniques and then classifies mask usage through a new convolutional neural network (CNN) model. The approach considers three different classes, assigning a different color to each: green for persons using the mask correctly, yellow when it is used incorrectly, and red when people do not have a mask. This study validates that CNN models can be very effective in carrying out these types of tasks, identifying faces and classifying them according to class. The real-time system is developed on a Raspberry Pi 4, which can be used to monitor and raise an alarm for people who are not wearing a mask. This study mainly benefits society by decreasing the spread of the virus between people. The proposed model achieves 99.69% accuracy on the MaskedFace-Net dataset, which compares well with other works in the current literature.

15.
J Therm Biol; 112: 103444, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36796899

ABSTRACT

This study proposed an infrared image-based method for screening febrile and subfebrile people, addressing society's need for alternative, quick-response, and effective methods of screening for COVID-19 contagious people. The methodology consisted of: (i) developing a method based on facial infrared imaging for possible COVID-19 early detection in people with and without fever (subfebrile state); (ii) using 1206 emergency room (ER) patients to develop an algorithm for general application of the method; and (iii) testing the method and algorithm effectiveness in 2558 cases (RT-qPCR tested for COVID-19) from 227,261 worker evaluations in five different countries. Artificial intelligence was used through a convolutional neural network (CNN) to develop the algorithm, which took facial infrared images as input and classified the tested individuals into three groups: fever (high risk), subfebrile (medium risk), and no fever (low risk). The results showed that suspicious and confirmed COVID-19 (+) cases characterized by temperatures below the 37.5 °C fever threshold were identified. In addition, average forehead and eye temperatures greater than 37.5 °C were not, by themselves, enough to detect fever as well as the proposed CNN algorithm. Most RT-qPCR-confirmed COVID-19 (+) cases found in the 2558-case sample (17 cases; 89.5%) belonged to the CNN-selected subfebrile group. Being in the subfebrile group was the main COVID-19 (+) risk factor, compared with age, diabetes, high blood pressure, smoking, and others. In sum, the proposed method was shown to be a potentially important new tool for screening COVID-19 (+) people for air travel and public places in general.


Subjects
Air Travel; COVID-19; Humans; Artificial Intelligence; COVID-19/diagnosis; Algorithms; Neural Networks, Computer; Fever
16.
J Digit Imaging; 36(3): 1060-1070, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36650299

ABSTRACT

Artificial neural networks (ANN) are artificial intelligence (AI) techniques used in the automated recognition and classification of pathological changes from clinical images in areas such as ophthalmology, dermatology, and oral medicine. The combination of enterprise imaging and AI is gaining notoriety for its potential benefits in healthcare areas such as cardiology, dermatology, ophthalmology, pathology, physiatry, radiation oncology, radiology, and endoscopy. The present study aimed to analyze, through a systematic literature review, the performance of ANN and deep learning in the recognition and automated classification of lesions from clinical images, compared with human performance. The PRISMA 2020 approach (Preferred Reporting Items for Systematic Reviews and Meta-analyses) was used, searching four databases for studies that reference the use of AI to define the diagnosis of lesions in the ophthalmology, dermatology, and oral medicine areas. Quantitative and qualitative analyses of the articles that met the inclusion criteria were performed. The search yielded the inclusion of 60 studies. It was found that interest in the topic has increased, especially in the last 3 years. We observed that the performance of AI models is promising, with high accuracy, sensitivity, and specificity, and most had outcomes equivalent to human comparators. The reproducibility of model performance in real-life practice has been reported as a critical point. Study designs and results have been progressively improved. AI resources have the potential to contribute to several areas of health. In the coming years, AI is likely to be incorporated into everyday practice, contributing to diagnostic precision and reducing the time required by the diagnostic process.


Subjects
Dermatology; Ophthalmology; Humans; Artificial Intelligence; Reproducibility of Results; Neural Networks, Computer
17.
Heliyon; 9(1): e12898, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36685403

ABSTRACT

Demand for low-lactose milk and milk products has been increasing worldwide due to the high number of people with lactose intolerance. These low-lactose dairy foods require fast, low-cost, and efficient methods for sugar quantification. However, available methods do not meet all these requirements. In this work, we propose the association of FTIR (Fourier transform infrared) spectroscopy with artificial intelligence to identify and quantify residual lactose and other sugars in milk. Convolutional neural networks (CNN) were built from the infrared spectra without data preprocessing, using hyperparameter tuning and saliency maps. For the quantitative prediction of the sugars in milk, a regression model was proposed, while for the qualitative assessment, a classification model was used. Raw, pasteurized, and ultra-high temperature (UHT) milk was spiked with lactose, glucose, and galactose at six concentrations (0.1-7.0 mg mL-1), and, in total, 432 samples were submitted to the convolutional neural network. Accuracy, precision, sensitivity, specificity, root mean square error, mean square error, mean absolute error, and coefficient of determination (R2) were used as evaluation parameters. The algorithms indicated a predictive capacity (accuracy) above 95% for classification, and R2 values of 81%, 86%, and 92% for lactose, glucose, and galactose quantification, respectively. Our results showed that the association of FTIR spectra with artificial intelligence tools, such as CNN, is an efficient, quick, and low-cost methodology for quantifying lactose and other sugars in milk.

18.
Appl Soft Comput; 134: 110014, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36687763

ABSTRACT

Coronavirus Disease 2019 (COVID-19), caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), has opened several challenges for research concerning diagnosis and treatment. Chest X-rays and computed tomography (CT) scans are effective and fast alternatives to detect and assess the damage that COVID-19 causes to the lungs at different stages of the disease. Although the CT scan is an accurate exam, the chest X-ray is still helpful due to its lower cost, speed, lower radiation exposure, and availability in low-income countries. Computer-aided diagnostic systems based on artificial intelligence (AI) and computer vision are an alternative for extracting features from X-ray images, providing an accurate COVID-19 diagnosis. However, specialized and expensive computational resources remain a challenge, and it needs to be better understood how low-cost devices and smartphones can host AI models to predict diseases in a timely manner. Even when using deep learning to support image-based medical diagnosis, challenges still need to be addressed, since the known techniques rely on centralized intelligence on high-performance servers, making it difficult to embed these models in low-cost devices. This paper sheds light on these questions by proposing the Artificial Intelligence as a Service Architecture (AIaaS), a hybrid AI support operation, both centralized and distributed, with the purpose of enabling the embedding of already-trained models on low-cost devices or smartphones. We demonstrate the suitability of our architecture through a case study of COVID-19 diagnosis using a low-cost device. Among the main findings of this paper, we point out the performance evaluation of low-cost devices in handling COVID-19 prediction tasks in a timely and accurate manner, and a quantitative performance evaluation of embedding CNN models on low-cost devices.
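
A hedged sketch of one way to realize the embedding step discussed above: converting an already-trained Keras CNN to TensorFlow Lite so it can run on a low-cost device or smartphone. The paper's AIaaS architecture is broader than this, and the tiny model below is a stand-in, not the authors' COVID-19 model.

```python
import tensorflow as tf

# Stand-in CNN for binary chest X-ray classification (grayscale 224x224 input).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(224, 224, 1)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),        # COVID-19 vs. non-COVID
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]       # post-training quantization
tflite_bytes = converter.convert()
with open("covid_cnn.tflite", "wb") as f:                  # file deployable on-device
    f.write(tflite_bytes)
```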

19.
F1000Res; 12: 14, 2023.
Article in English | MEDLINE | ID: mdl-38826575

ABSTRACT

Background: Glaucoma and diabetic retinopathy (DR) are the leading causes of irreversible retinal damage leading to blindness. Early detection of these diseases through regular screening is especially important to prevent progression. Retinal fundus imaging serves as the principal method for diagnosing glaucoma and DR. Consequently, automated detection of eye diseases represents a significant application of retinal image analysis. Compared with classical diagnostic techniques, image classification by convolutional neural networks (CNN) exhibits potential for effective eye disease detection. Methods: This paper proposes the use of a MATLAB-retrained AlexNet CNN for computerized eye disease identification, particularly glaucoma and diabetic retinopathy, by employing retinal fundus images. The database was assembled from free-access databases and databases available upon request. A transfer learning technique was employed to retrain the AlexNet CNN for non-disease (Non_D), glaucoma (Sus_G), and diabetic retinopathy (Sus_R) classification. Moreover, model benchmarking was conducted using ResNet50 and GoogLeNet architectures. A Grad-CAM analysis is also incorporated for each eye condition examined. Results: Metrics for validation accuracy, false positives, false negatives, precision, and recall were reported. Validation accuracies for the NetTransfer (I-V) and netAlexNet models ranged from 89.7% to 94.3%, demonstrating varied effectiveness in identifying Non_D, Sus_G, and Sus_R categories, with netAlexNet achieving 93.2% accuracy in the benchmarking of models against netResNet50 at 93.8% and netGoogLeNet at 90.4%. Conclusions: This study demonstrates the efficacy of using a MATLAB-retrained AlexNet CNN for detecting glaucoma and diabetic retinopathy. It emphasizes the need for automated early detection tools, proposing CNNs as accessible solutions without replacing existing technologies.


Subjects
Diabetic Retinopathy; Glaucoma; Neural Networks, Computer; Humans; Diabetic Retinopathy/diagnostic imaging; Diabetic Retinopathy/diagnosis; Glaucoma/diagnosis; Glaucoma/diagnostic imaging; Artificial Intelligence
20.
São Paulo Med J; 140(6): 837-845, 2022 Nov-Dec.
Article in English | LILACS-Express | LILACS | ID: biblio-1410230

ABSTRACT

BACKGROUND: Artificial intelligence (AI) deals with the development of algorithms that seek to perceive their environment and perform actions that maximize the chance of successfully reaching predetermined goals. OBJECTIVE: To provide an overview of the basic principles of AI and its main studies in the fields of glaucoma, retinopathy of prematurity, age-related macular degeneration, and diabetic retinopathy. From this perspective, the limitations and potential challenges that have accompanied the implementation and development of this new technology within ophthalmology are presented. DESIGN AND SETTING: Narrative review developed by a research group at the Universidade Federal de São Paulo (UNIFESP), São Paulo (SP), Brazil. METHODS: We searched the literature on the main applications of AI within ophthalmology, using the keywords "artificial intelligence", "diabetic retinopathy", "macular degeneration age-related", "glaucoma" and "retinopathy of prematurity", covering the period from January 1, 2007, to May 3, 2021. We used the MEDLINE database (via PubMed) and the LILACS database (via Virtual Health Library) to identify relevant articles. RESULTS: We retrieved 457 references, of which 47 were considered eligible for intensive review and critical analysis. CONCLUSION: The use of technology, as embodied in AI algorithms, is a way of providing an increasingly accurate service and enhancing scientific research. It forms a source of complement and innovation in relation to the daily skills of ophthalmologists. Thus, AI adds technology to human expertise.
