Results 1 - 20 of 72
1.
Comput Biol Med ; 180: 108950, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39096605

ABSTRACT

BACKGROUND: Detecting and analyzing Alzheimer's disease (AD) in its early stages is a crucial and significant challenge. Speech data from AD patients can aid in diagnosing AD, since speech features share common patterns independent of race and spoken language. However, previous models for diagnosing AD from speech data have often focused on the characteristics of a single language, with no guarantee of scalability to other languages. In this study, we applied the same acoustic feature extraction method to two language datasets to diagnose AD. METHODS: Using Korean and English speech datasets, we evaluated ten models capable of real-time classification of AD patients and healthy controls, regardless of language. Four machine learning models were based on hand-crafted features, while the remaining six deep learning models used non-explainable features. RESULTS: The highest accuracy achieved by the machine learning models was 0.73 and 0.69 on the Korean and English speech datasets, respectively. The deep learning models reached maximum accuracies of 0.75 and 0.78, with minimum classification times of 0.01 s and 0.02 s. These findings demonstrate that the models are robust across Korean and English and can diagnose AD in real time from a 30-s voice sample. CONCLUSION: Non-explainable deep learning models that directly learn voice representations surpassed machine learning models built on hand-crafted features for AD diagnosis. In addition, these models demonstrate the potential for extension to language-agnostic AD diagnosis.
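The abstract does not enumerate the acoustic features used, so the sketch below is illustrative only: three common hand-crafted, language-independent acoustic features (zero-crossing rate, RMS energy, spectral centroid) computed with NumPy, assuming a mono waveform sampled at 16 kHz over the 30-s window the paper mentions.

```python
# Illustrative hand-crafted acoustic features; the paper's exact feature
# set is not specified in the abstract, so these are common stand-ins.
import numpy as np

def acoustic_features(signal, sr=16000):
    """Return zero-crossing rate, RMS energy, and spectral centroid."""
    zcr = np.mean(np.abs(np.diff(np.signbit(signal).astype(int))))
    rms = np.sqrt(np.mean(signal ** 2))
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([zcr, rms, centroid])

# Example: a 440 Hz tone over a 30-s window
sr = 16000
t = np.arange(30 * sr) / sr
feats = acoustic_features(np.sin(2 * np.pi * 440 * t), sr)
```

Such per-recording scalars are exactly the kind of feature a shallow classifier would consume, independent of the language spoken.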


Subject(s)
Alzheimer Disease; Language; Humans; Alzheimer Disease/diagnosis; Alzheimer Disease/classification; Female; Male; Aged; Deep Learning; Machine Learning; Speech; Diagnosis, Computer-Assisted/methods; Aged, 80 and over
2.
Sensors (Basel) ; 24(16)2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39205095

ABSTRACT

This article presents a comprehensive collection of formulas and calculations for hand-crafted feature extraction from condition monitoring signals. The documented features include 123 for the time domain and 46 for the frequency domain. Furthermore, a machine learning-based methodology is presented to evaluate the performance of these features in fault classification tasks using seven datasets from different rotating machines. The evaluation methodology uses seven ranking methods to select the best ten hand-crafted features per method for each database, which are subsequently evaluated by three types of classifiers. This process is applied exhaustively by evaluation groups, combining our databases with an external benchmark. A summary table of the classifiers' performance results is also presented, including the classification percentage and the number of features required to achieve that value. Through graphic resources, it has been possible to show the prevalence of certain features over others, how they are associated with each database, and the order of importance assigned by the ranking methods. Likewise, it has been possible to identify which features have the highest appearance percentages for each database across all experiments. The results suggest that hand-crafted feature extraction is an effective technique with low computational cost and high interpretability for fault identification and diagnosis.
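The rank-then-select step can be sketched with one of the many possible ranking criteria. The Fisher score below is an assumption for illustration (the paper evaluates seven ranking methods, which the abstract does not name):

```python
# Fisher-score feature ranking: between-class variance over
# within-class variance, computed per feature column.
import numpy as np

def fisher_scores(X, y):
    """Score each column of X by class separability."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    num = sum((y == c).sum() * (X[y == c].mean(axis=0) - overall) ** 2
              for c in classes)
    den = sum((y == c).sum() * X[y == c].var(axis=0) for c in classes)
    return num / (den + 1e-12)

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 5))
X[:, 2] += 3 * y            # make feature 2 strongly discriminative
top = np.argsort(fisher_scores(X, y))[::-1][:1]
```

The top-ranked columns would then be passed to the downstream classifiers, mirroring the "best ten features per ranking method" protocol.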

3.
Comput Biol Chem ; 112: 108141, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38996756

ABSTRACT

Anticancer peptides (ACPs) have attracted significant interest as a novel method of treating cancer due to their ability to selectively kill cancer cells without damaging normal cells. Many artificial intelligence-based methods have demonstrated impressive performance in predicting ACPs. Nevertheless, the limitations of existing methods in feature engineering include handcrafted features driven by prior knowledge, insufficient feature extraction, and inefficient feature fusion. In this study, we propose ACP-PDAFF, a model based on a pretrained model and dual-channel attentional feature fusion (DAFF). First, to reduce the heavy dependence on handcrafted features based on expert knowledge, binary profile features (BPF) and physicochemical property features (PCPF) are used as inputs to the transformer model. Second, to learn more diverse feature information about ACPs, the pretrained model ProtBert is utilized. Third, DAFF is employed to better fuse the different feature channels. Finally, to evaluate the performance of the model, we compare it with other methods on five benchmark datasets: the ACP-Mixed-80 dataset, the Main and Alternate datasets of AntiCP 2.0, the LEE and Independent dataset, and the ACPred-Fuse dataset. ACP-PDAFF achieves accuracies of 0.86, 0.80, 0.94, 0.97, and 0.95 on these five datasets, respectively, higher than existing methods by 1% to 12%. Therefore, by learning rich feature information and effectively fusing different feature channels, ACP-PDAFF achieves outstanding performance. Our code and datasets are available at https://github.com/wongsing/ACP-PDAFF.
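The binary profile feature (BPF) input can be sketched as a per-residue one-hot encoding, assuming the standard 20-letter amino-acid alphabet (the repository may order the alphabet differently):

```python
# Binary profile features: one one-hot row of length 20 per residue.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def binary_profile(seq):
    """Encode a peptide as a (len(seq), 20) one-hot matrix."""
    idx = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
    bpf = np.zeros((len(seq), len(AMINO_ACIDS)))
    for pos, aa in enumerate(seq):
        bpf[pos, idx[aa]] = 1.0
    return bpf

bpf = binary_profile("ACDK")
```

A PCPF channel would replace each one-hot row with a vector of physicochemical property values for the residue; both matrices then feed the transformer branch.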


Subject(s)
Antineoplastic Agents; Peptides; Peptides/chemistry; Antineoplastic Agents/chemistry; Antineoplastic Agents/pharmacology; Humans
4.
Comput Biol Med ; 179: 108841, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39002317

ABSTRACT

Speech emotion recognition (SER) stands as a prominent and dynamic research field in data science due to its extensive application in domains such as psychological assessment, mobile services, and computer games. In previous research, numerous studies utilized manually engineered features for emotion classification, achieving commendable accuracy. However, these features tend to underperform in complex scenarios, leading to reduced classification accuracy. These scenarios include: 1. Datasets that contain diverse speech patterns, dialects, accents, or variations in emotional expressions. 2. Data with background noise. 3. Scenarios where the distribution of emotions varies significantly across datasets. 4. Combined datasets from different sources, which introduce complexities due to variations in recording conditions, data quality, and emotional expressions. Consequently, there is a need to improve the classification performance of SER techniques. To address this, a novel SER framework was introduced in this study. Prior to feature extraction, signal preprocessing and data augmentation methods were applied to augment the available data, after which 18 informative features were derived from each signal. The discriminative feature set was obtained using feature selection techniques and then utilized as input for emotion recognition on the SAVEE, RAVDESS, and EMO-DB datasets. Furthermore, this research also implemented a cross-corpus model that incorporated all speech files related to common emotions from the three datasets. The experimental outcomes demonstrated the superior performance of the SER framework compared to existing frameworks in the field. Notably, the framework presented in this study achieved remarkable accuracy rates across various datasets. Specifically, the proposed model obtained accuracies of 95%, 94%, 97%, and 97% on the SAVEE, RAVDESS, EMO-DB, and cross-corpus datasets, respectively. These results underscore the significant contribution of our proposed framework to the field of SER.
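The abstract does not list its augmentation operators, so the sketch below shows two common speech augmentations as assumed examples: additive noise at a target SNR and a simple circular time shift.

```python
# Two common speech-augmentation operators (illustrative; the paper's
# exact augmentation pipeline is not specified in the abstract).
import numpy as np

def add_noise(signal, snr_db=20.0, rng=None):
    """Add Gaussian noise at the requested signal-to-noise ratio."""
    rng = rng or np.random.default_rng()
    power = np.mean(signal ** 2)
    noise_power = power / (10 ** (snr_db / 10))
    return signal + rng.normal(scale=np.sqrt(noise_power), size=signal.shape)

def time_shift(signal, shift):
    """Circularly shift the waveform by `shift` samples."""
    return np.roll(signal, shift)

sig = np.sin(np.linspace(0, 20 * np.pi, 8000))
noisy = add_noise(sig, snr_db=20.0, rng=np.random.default_rng(1))
shifted = time_shift(sig, 100)
```

Each augmented copy would then go through the same 18-feature extraction step as the original recording.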


Subject(s)
Emotions; Humans; Emotions/physiology; Speech/physiology; Male; Female; Speech Recognition Software; Databases, Factual; Signal Processing, Computer-Assisted
5.
BMC Med Imaging ; 24(1): 89, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38622546

ABSTRACT

BACKGROUND: Accurate preoperative identification of ovarian tumour subtypes is imperative for patients, as it enables physicians to tailor precise and individualized management strategies. We therefore developed an ultrasound (US)-based multiclass prediction algorithm for differentiating between benign, borderline, and malignant ovarian tumours. METHODS: We randomised data from 849 patients with ovarian tumours into training and testing sets at a ratio of 8:2. The regions of interest on the US images were segmented, and handcrafted radiomics features were extracted and screened. We applied the one-versus-rest method for multiclass classification. We input the best features into machine learning (ML) models and constructed a radiomic signature (Rad_Sig). US images of the maximum trimmed ovarian tumour sections were input into a pre-trained convolutional neural network (CNN) model. After internal enhancement and complex algorithms, each sample's predicted probability, known as the deep transfer learning signature (DTL_Sig), was generated. Clinical baseline data were analysed. Statistically significant clinical parameters and US semantic features in the training set were used to construct clinical signatures (Clinic_Sig). The prediction results of Rad_Sig, DTL_Sig, and Clinic_Sig for each sample were fused as new feature sets to build the combined model, namely the deep learning radiomic signature (DLR_Sig). We used the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) to estimate the performance of the multiclass classification model. RESULTS: The training set included 440 benign, 44 borderline, and 196 malignant ovarian tumours. The testing set included 109 benign, 11 borderline, and 49 malignant ovarian tumours. The DLR_Sig three-class prediction model had the best overall and class-specific classification performance, with micro- and macro-average AUCs of 0.90 and 0.84, respectively, on the testing set. The class-specific AUCs were 0.84, 0.85, and 0.83 for benign, borderline, and malignant ovarian tumours, respectively. In the confusion matrix, the classifier models of Clinic_Sig and Rad_Sig could not recognise borderline ovarian tumours. However, the proportions of borderline and malignant ovarian tumours identified by DLR_Sig were the highest, at 54.55% and 63.27%, respectively. CONCLUSIONS: The three-class prediction model of US-based DLR_Sig can discriminate between benign, borderline, and malignant ovarian tumours. Therefore, it may guide clinicians in determining the differential management of patients with ovarian tumours.
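The late-fusion step ("prediction results ... were fused as new feature sets") can be sketched as a concatenation of the three signatures' per-sample class probabilities; concatenation is an assumption here, and the probability arrays below are synthetic.

```python
# Late fusion of three signature outputs into one feature matrix for
# the combined DLR_Sig model (concatenation assumed; data synthetic).
import numpy as np

def fuse_signatures(rad_prob, dtl_prob, clinic_prob):
    """Stack three (n_samples, n_classes) probability arrays side by side."""
    return np.concatenate([rad_prob, dtl_prob, clinic_prob], axis=1)

n, k = 5, 3  # 5 samples, 3 classes (benign / borderline / malignant)
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(k), size=n) for _ in range(3)]
fused = fuse_signatures(*probs)
```

The fused matrix then trains the final three-class model, which is why DLR_Sig can recover borderline cases that the individual signatures miss.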


Subject(s)
Deep Learning; Ovarian Neoplasms; Humans; Female; Radiomics; Ovarian Neoplasms/diagnostic imaging; Ultrasonography; Algorithms; Retrospective Studies
7.
Med Biol Eng Comput ; 62(3): 913-924, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38091162

ABSTRACT

Globally, lung and colon cancers are among the most prevalent and lethal tumors. Early cancer identification is essential to increase the likelihood of survival. Histopathological images are considered an appropriate tool for diagnosing cancer, which is tedious and error-prone if done manually. Recently, machine learning methods based on feature engineering have gained prominence in automatic histopathological image classification. Furthermore, these methods are more interpretable than deep learning, which operates in a "black box" manner. In the medical profession, the interpretability of a technique is critical to gaining the trust of end users to adopt it. In view of the above, this work aims to create an accurate and interpretable machine-learning technique for the automated classification of lung and colon cancers from histopathology images. In the proposed approach, following the preprocessing steps, texture and color features are retrieved using the Haralick and color histogram feature extraction algorithms, respectively. The obtained features are concatenated to form a single feature set. The three feature sets (texture, color, and combined features) are passed to the Light Gradient Boosting Machine (LightGBM) classifier for classification, and their performance is evaluated on the LC25000 dataset using hold-out and stratified 10-fold cross-validation (Stratified 10-FCV) techniques. On the test/hold-out set, LightGBM with texture, color, and combined features classifies the lung and colon cancer images with 97.72%, 99.92%, and 100% accuracy, respectively. In addition, stratified 10-fold cross-validation also revealed that LightGBM's combined or color features performed well, with an excellent mean auc_mu score and a low mean multi_logloss value. Thus, the proposed technique can help histologists detect and classify lung and colon histopathology images more efficiently, effectively, and economically, resulting in greater productivity.
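The two feature families can be sketched minimally: a single-offset gray-level co-occurrence matrix with the Haralick contrast statistic, plus per-channel color histograms (the full Haralick set has many more statistics; this is a reduced illustration).

```python
# Minimal GLCM/Haralick-contrast and color-histogram features,
# assuming images with intensities in [0, 1).
import numpy as np

def glcm_contrast(gray, levels=8):
    """Haralick contrast from a right-neighbor co-occurrence matrix."""
    q = (gray * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()
    idx = np.arange(levels)
    return np.sum(glcm * (idx[:, None] - idx[None, :]) ** 2)

def color_histogram(rgb, bins=8):
    """Concatenated normalized histograms of the R, G, B channels."""
    return np.concatenate([
        np.histogram(rgb[..., c], bins=bins, range=(0, 1))[0] / rgb[..., c].size
        for c in range(3)
    ])

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
features = np.concatenate([[glcm_contrast(img.mean(axis=2))],
                           color_histogram(img)])
```

Concatenating the two families, as here, yields the "combined" feature set fed to the classifier.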


Subject(s)
Colonic Neoplasms; Humans; Colonic Neoplasms/diagnostic imaging; Machine Learning; Algorithms; Lung/diagnostic imaging
8.
Diagnostics (Basel) ; 13(17)2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37685321

ABSTRACT

Diabetic retinopathy (DR) is a complication of diabetes that damages the delicate blood vessels of the retina and leads to blindness. Ophthalmologists diagnose the retina by imaging the fundus. The process takes a long time and requires skilled doctors to diagnose and determine the stage of DR. Therefore, automatic techniques using artificial intelligence play an important role in analyzing fundus images to detect the stages of DR development. However, diagnosis using artificial intelligence techniques is a difficult task that passes through many stages, and the extraction of representative features is important for reaching satisfactory results. Convolutional Neural Network (CNN) models play an important and distinct role in extracting features with high accuracy. In this study, fundus images were used to detect the developmental stages of DR by two proposed methods, each with two systems. The first proposed method uses GoogLeNet with SVM and ResNet-18 with SVM. The second method uses Feed-Forward Neural Networks (FFNN) based on hybrid features: first, those extracted using GoogLeNet together with the Fuzzy Color Histogram (FCH), Gray Level Co-occurrence Matrix (GLCM), and Local Binary Pattern (LBP); and then those extracted using ResNet-18 together with FCH, GLCM, and LBP. All the proposed methods obtained superior results. The FFNN with the hybrid features of ResNet-18, FCH, GLCM, and LBP obtained 99.7% accuracy, 99.6% precision, 99.6% sensitivity, 100% specificity, and 99.86% AUC.
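Of the handcrafted descriptors named above, LBP is the simplest to show: each interior pixel gets an 8-bit code from comparing its 8 neighbors against it, and the histogram of codes is the feature vector.

```python
# Basic 8-neighbor LBP codes and their normalized histogram; the CNN
# half of the hybrid feature vector is omitted in this sketch.
import numpy as np

def lbp_codes(gray):
    """8-bit LBP code for each interior pixel of a 2-D grayscale image."""
    c = gray[1:-1, 1:-1]
    neighbors = [gray[:-2, :-2], gray[:-2, 1:-1], gray[:-2, 2:],
                 gray[1:-1, 2:], gray[2:, 2:], gray[2:, 1:-1],
                 gray[2:, :-2], gray[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=int)
    for bit, nb in enumerate(neighbors):
        codes += (nb >= c).astype(int) << bit
    return codes

rng = np.random.default_rng(0)
img = rng.random((16, 16))
codes = lbp_codes(img)
hist = np.bincount(codes.ravel(), minlength=256) / codes.size
```

The 256-bin histogram would be concatenated with the FCH, GLCM, and CNN features before the FFNN.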

9.
Diagnostics (Basel) ; 13(16)2023 Aug 11.
Article in English | MEDLINE | ID: mdl-37627909

ABSTRACT

Brain tumor segmentation from magnetic resonance imaging (MRI) scans is critical for the diagnosis, treatment planning, and monitoring of therapeutic outcomes. Thus, this research introduces a novel hybrid approach that combines handcrafted features with convolutional neural networks (CNNs) to enhance the performance of brain tumor segmentation. In this study, handcrafted features were extracted from MRI scans, including intensity-based, texture-based, and shape-based features. In parallel, a unique CNN architecture was developed and trained to detect features from the data automatically. In the proposed hybrid method, the handcrafted features and the features learned by the CNN were passed along different pathways and combined in a new CNN. In this study, the Brain Tumor Segmentation (BraTS) challenge dataset was used to measure performance with a variety of assessment measures, such as segmentation accuracy, Dice score, sensitivity, and specificity. The achieved results showed that our proposed approach outperformed the traditional handcrafted feature-based and individual CNN-based methods used for brain tumor segmentation. In addition, the incorporation of handcrafted features enhanced the performance of the CNN, yielding a more robust and generalizable solution. This research has significant potential for real-world clinical applications where precise and efficient brain tumor segmentation is essential. Future research directions include investigating alternative feature fusion techniques and incorporating additional imaging modalities to further improve the proposed method's performance.

10.
Sensors (Basel) ; 23(12)2023 Jun 12.
Article in English | MEDLINE | ID: mdl-37420693

ABSTRACT

Solubility measurements are essential in various research and industrial fields. With the automation of processes, the importance of automatic and real-time solubility measurements has increased. Although end-to-end learning methods are commonly used for classification tasks, the use of handcrafted features is still important for specific tasks with the limited labeled images of solutions used in industrial settings. In this study, we propose a method that uses computer vision algorithms to extract nine handcrafted features from images and train a DNN-based classifier to automatically classify solutions based on their dissolution states. To validate the proposed method, a dataset was constructed using various solution images ranging from undissolved solutes in the form of fine particles to those completely covering the solution. Using the proposed method, the solubility status can be automatically screened in real time by using a display and camera on a tablet or mobile phone. Therefore, by combining an automatic solubility changing system with the proposed method, a fully automated process could be achieved without human intervention.
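The abstract does not enumerate the nine handcrafted features, so the sketch below uses three illustrative stand-ins, assuming that undissolved particles raise local contrast in the image of the solution.

```python
# Illustrative handcrafted features for a solution image (the paper's
# nine features are not listed in the abstract; these are stand-ins).
import numpy as np

def solution_features(gray):
    """Mean intensity, intensity std, and mean gradient magnitude."""
    gy, gx = np.gradient(gray)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    return np.array([gray.mean(), gray.std(), grad.mean()])

clear = np.full((64, 64), 0.8)                       # fully dissolved
rng = np.random.default_rng(0)
turbid = 0.8 + 0.2 * rng.standard_normal((64, 64))   # particle-speckled
f_clear, f_turbid = solution_features(clear), solution_features(turbid)
```

A small DNN classifier over such per-image feature vectors is cheap enough to run in real time on a tablet or phone camera feed, as the paper describes.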


Subject(s)
Algorithms; Neural Networks, Computer; Humans; Solubility; Automation
11.
Diagnostics (Basel) ; 13(11)2023 May 29.
Article in English | MEDLINE | ID: mdl-37296753

ABSTRACT

White blood cells (WBCs) are one of the main components of blood produced by the bone marrow. WBCs are part of the immune system that protects the body from infectious diseases, and an increase or decrease in the count of any WBC type can indicate a particular disease. Thus, recognizing WBC types is essential for assessing a patient's health and identifying disease. Analyzing blood samples to determine the counts and types of WBCs requires experienced doctors. Artificial intelligence techniques have been applied to analyze blood samples and classify WBC types to help doctors distinguish between infectious diseases associated with increased or decreased WBC counts. This study developed strategies for analyzing blood slide images to classify WBC types. The first strategy classifies WBC types with the SVM-CNN technique. The second strategy classifies WBC types with an SVM based on hybrid CNN features, namely the VGG19-ResNet101-SVM, ResNet101-MobileNet-SVM, and VGG19-ResNet101-MobileNet-SVM techniques. The third strategy classifies WBC types with an FFNN based on a hybrid of CNN and handcrafted features. With MobileNet and handcrafted features, the FFNN achieved an AUC of 99.43%, accuracy of 99.80%, precision of 99.75%, specificity of 99.75%, and sensitivity of 99.68%.

12.
Multimed Syst ; 29(3): 1527-1577, 2023.
Article in English | MEDLINE | ID: mdl-37261261

ABSTRACT

The advances in human face recognition (FR) systems have recorded sublime success for automatic and secured authentication in diverse domains. Although traditional methods have been overshadowed by their face recognition counterparts during this progress, computer vision has gained rapid traction, and modern accomplishments address problems with real-world complexity. However, security threats in FR-based systems are a growing concern that offers a new track for the research community. In particular, the recent past has witnessed ample instances of spoofing attacks in which an imposter breaches the security of the system with an artifact of a human face to circumvent the sensor module. Therefore, presentation attack detection (PAD) capabilities are instilled in the system to discriminate genuine and fake traits and anticipate their impact on the overall behavior of FR-based systems. To exhaustively scrutinize the current state-of-the-art efforts, provide insights, and identify potential research directions on face PAD mechanisms, this systematic study presents a review of face anti-spoofing techniques that use computational approaches. The study covers advancements in face PAD mechanisms ranging from traditional hardware-based solutions to up-to-date handcrafted-feature or deep learning-based approaches. We also present an analytical overview of face artifacts, performance protocols, and benchmark face anti-spoofing datasets. In addition, we analyze twelve recent state-of-the-art (SOTA) face PAD techniques on a common platform using an identical dataset (i.e., REPLAY-ATTACK) and performance protocols (i.e., HTER and ACA). Our overall analysis finds that although prevalent face PAD mechanisms demonstrate promising performance, some crucial issues remain that require future attention. Our analysis puts forward a number of open issues, such as limited generalization to unknown attacks, the inadequacy of face datasets for DL models, training models with new fakes, efficient DL-enabled face PAD with smaller datasets, and the limited discrimination of handcrafted features. Furthermore, the COVID-19 pandemic poses an additional challenge to existing face-based recognition systems, and hence to PAD methods. Our motive is to present a complete reference of studies in this field and orient researchers toward promising directions.

13.
Diagnostics (Basel) ; 13(9)2023 May 02.
Article in English | MEDLINE | ID: mdl-37175000

ABSTRACT

Knee osteoarthritis (KOA) is a chronic disease that impedes movement, especially in the elderly, affecting more than 5% of people worldwide. KOA progresses through many stages, from a mild grade that can be treated to a severe grade in which the knee must be replaced. Therefore, early diagnosis of KOA is essential to avoid progression to the advanced stages. X-rays are one of the vital techniques for the early detection of knee abnormalities, but distinguishing Kellgren-Lawrence (KL) grades requires highly experienced doctors and radiologists. Thus, artificial intelligence techniques address the shortcomings of manual diagnosis. This study developed three methodologies for X-ray analysis of both the Osteoarthritis Initiative (OAI) and Rani Channamma University (RCU) datasets for diagnosing KOA and discriminating between KL grades. In all methodologies, the Principal Component Analysis (PCA) algorithm was applied after the CNN models to remove unimportant and redundant features and keep the essential features. The first methodology analyzes X-rays and diagnoses the degree of knee inflammation using the VGG-19-FFNN and ResNet-101-FFNN systems. The second methodology analyzes X-rays and diagnoses the KOA grade with a Feed-Forward Neural Network (FFNN) based on the combined features of VGG-19 and ResNet-101, before and after PCA. The third methodology analyzes X-rays and diagnoses the KOA grade with an FFNN based on the fusion of VGG-19 features with handcrafted features, and the fusion of ResNet-101 features with handcrafted features. For the OAI dataset with the fused VGG-19 and handcrafted features, the FFNN obtained an AUC of 99.25%, an accuracy of 99.1%, a sensitivity of 98.81%, a specificity of 100%, and a precision of 98.24%. For the RCU dataset with the fused VGG-19 and handcrafted features, the FFNN obtained an AUC of 99.07%, an accuracy of 98.20%, a sensitivity of 98.16%, a specificity of 99.73%, and a precision of 98.08%.
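The PCA step applied after the CNN feature extractors can be sketched with an SVD-based projection, assuming centered feature vectors:

```python
# PCA via SVD: project CNN feature vectors onto the top principal
# components to drop redundant directions.
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, S

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
X[:, 0] *= 5                       # a dominant direction of variance
Z, S = pca_reduce(X, n_components=2)
```

The reduced matrix `Z` (here 2 components instead of 10) is what the FFNN would consume in place of the raw CNN features.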

14.
Diagnostics (Basel) ; 13(10)2023 May 11.
Article in English | MEDLINE | ID: mdl-37238190

ABSTRACT

Early detection of eye diseases is the only way to receive timely treatment and prevent blindness. Colour fundus photography (CFP) is an effective fundus examination technique. Because of the similarity in the symptoms of eye diseases in their early stages and the difficulty in distinguishing between the types of disease, there is a need for computer-assisted automated diagnostic techniques. This study focuses on classifying an eye disease dataset using hybrid techniques based on feature extraction with fusion methods. Three strategies were designed to classify CFP images for the diagnosis of eye disease. The first method classifies the eye disease dataset using an Artificial Neural Network (ANN) with features from the MobileNet and DenseNet121 models separately, after reducing the high-dimensional and repetitive features using Principal Component Analysis (PCA). The second method classifies the eye disease dataset using an ANN on the basis of fused features from the MobileNet and DenseNet121 models, before and after feature reduction. The third method classifies the eye disease dataset using an ANN based on the features of the MobileNet and DenseNet121 models, each fused separately with handcrafted features. Based on the fused MobileNet and handcrafted features, the ANN attained an AUC of 99.23%, an accuracy of 98.5%, a precision of 98.45%, a specificity of 99.4%, and a sensitivity of 98.75%.

15.
Diagnostics (Basel) ; 13(10)2023 May 17.
Article in English | MEDLINE | ID: mdl-37238243

ABSTRACT

Breast cancer is the second most common type of cancer among women, and it can threaten women's lives if it is not diagnosed early. There are many methods for detecting breast cancer, but they cannot distinguish between benign and malignant tumours. Therefore, a biopsy taken from the patient's abnormal tissue is an effective way to distinguish between malignant and benign breast cancer tumours. There are many challenges facing pathologists and experts in diagnosing breast cancer, including the addition of medical fluids of various colors, the orientation of the sample, and the small number of doctors and their differing opinions. Thus, artificial intelligence techniques solve these challenges and help clinicians resolve their diagnostic differences. In this study, three techniques, each with three systems, were developed to diagnose multiclass and binary-class breast cancer datasets and distinguish between benign and malignant types at 40× and 400× magnification factors. The first technique diagnoses the breast cancer dataset using an artificial neural network (ANN) with selected features from VGG-19 and ResNet-18. The second technique diagnoses the breast cancer dataset using an ANN with combined features of VGG-19 and ResNet-18, before and after principal component analysis (PCA). The third technique analyzes the breast cancer dataset using an ANN with hybrid features: a hybrid of VGG-19 and handcrafted features, and a hybrid of ResNet-18 and handcrafted features. The handcrafted features are mixed features extracted using the Fuzzy Color Histogram (FCH), Local Binary Pattern (LBP), Discrete Wavelet Transform (DWT), and Gray Level Co-occurrence Matrix (GLCM) methods. On the multiclass dataset, the ANN with the hybrid features of VGG-19 and handcrafted features reached a precision of 95.86%, an accuracy of 97.3%, a sensitivity of 96.75%, an AUC of 99.37%, and a specificity of 99.81% with images at a magnification factor of 400×. On the binary-class dataset, the ANN with the hybrid features of VGG-19 and handcrafted features reached a precision of 99.74%, an accuracy of 99.7%, a sensitivity of 100%, an AUC of 99.85%, and a specificity of 100% with images at a magnification factor of 400×.
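Of the four handcrafted extractors named, the DWT component can be sketched as a one-level 2-D Haar transform, with subband energies serving as features (using subband energies as the feature summary is an assumption; the abstract does not specify the wavelet statistics used):

```python
# One-level 2-D Haar DWT; subband energies as texture features.
import numpy as np

def haar_dwt2(gray):
    """One-level Haar transform: returns LL, LH, HL, HH subbands."""
    a = (gray[0::2, :] + gray[1::2, :]) / 2      # row pairs: average
    d = (gray[0::2, :] - gray[1::2, :]) / 2      # row pairs: difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

rng = np.random.default_rng(0)
img = rng.random((32, 32))
subbands = haar_dwt2(img)
energies = np.array([np.mean(b ** 2) for b in subbands])
```

These energies would be mixed with the FCH, LBP, and GLCM features to form the handcrafted half of the hybrid vector.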

16.
Med Eng Phys ; 115: 103971, 2023 05.
Article in English | MEDLINE | ID: mdl-37120169

ABSTRACT

PURPOSE: The classification of medical images is an important priority for clinical research and helps to improve the diagnosis of various disorders. This work aims to classify the neuroradiological features of patients with Alzheimer's disease (AD) using an automatic hand-modeled method with high accuracy. MATERIALS AND METHOD: This work uses two (private and public) datasets. The private dataset consists of 3807 magnetic resonance imaging (MRI) and computed tomography (CT) images belonging to two (normal and AD) classes. The second, public (Kaggle AD) dataset contains 6400 MR images. The presented classification model comprises three fundamental phases: feature extraction using an exemplar hybrid feature extractor, neighborhood component analysis-based feature selection, and classification utilizing eight different classifiers. The novelty of this model lies in feature extraction. This phase is inspired by vision transformers; hence, 16 exemplars are generated. Histogram of oriented gradients (HOG), local binary pattern (LBP), and local phase quantization (LPQ) feature extraction functions have been applied to each exemplar/patch and to the raw brain image. Finally, the created features are merged, and the best features are selected using neighborhood component analysis (NCA). These features are fed to eight classifiers to obtain the highest classification performance with our proposed method. The presented image classification model uses exemplar histogram-based features; hence, it is called ExHiF. RESULTS: We have developed the ExHiF model with a ten-fold cross-validation strategy using two (private and public) datasets with shallow classifiers. We obtained 100% classification accuracy using cubic support vector machine (CSVM) and fine k-nearest neighbor (FkNN) classifiers for both datasets.
CONCLUSIONS: Our developed model is ready to be validated with more datasets and has the potential to be employed in mental hospitals to assist neurologists in confirming their manual screening of AD using MRI/CT images.
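The exemplar-generation phase of ExHiF can be sketched as follows, assuming the 16 exemplars come from a 4×4 grid (a natural reading of the abstract) and substituting a plain intensity histogram for the HOG/LBP/LPQ extractors for brevity:

```python
# Exemplar generation: split the image into a 4x4 grid of patches and
# take a histogram-style descriptor per patch plus one for the raw image.
# (Plain intensity histograms stand in for HOG/LBP/LPQ here.)
import numpy as np

def exemplar_histograms(gray, grid=4, bins=16):
    h, w = gray.shape
    ph, pw = h // grid, w // grid
    feats = []
    for r in range(grid):
        for c in range(grid):
            patch = gray[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
            feats.append(np.histogram(patch, bins=bins, range=(0, 1))[0])
    feats.append(np.histogram(gray, bins=bins, range=(0, 1))[0])
    return np.concatenate(feats)

rng = np.random.default_rng(0)
features = exemplar_histograms(rng.random((64, 64)))
```

The merged per-patch descriptors would then pass through NCA selection before reaching the shallow classifiers.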


Subject(s)
Alzheimer Disease; Humans; Alzheimer Disease/diagnostic imaging; Alzheimer Disease/pathology; Magnetic Resonance Imaging/methods; Image Interpretation, Computer-Assisted/methods; Brain/diagnostic imaging; Tomography, X-Ray Computed
17.
Sensors (Basel) ; 23(4)2023 Feb 09.
Article in English | MEDLINE | ID: mdl-36850556

ABSTRACT

Artificial intelligence, and especially deep learning methods, have achieved outstanding results for various applications in the past few years. Pain recognition is one of them, as various models have been proposed to replace the previous gold standard with an automated and objective assessment. While the accuracy of such models could be increased incrementally, the understandability and transparency of these systems have not been the main focus of the research community thus far. Thus, in this work, several outcomes and insights of explainable artificial intelligence applied to the electrodermal activity sensor data of the PainMonit and BioVid Heat Pain Database are presented. For this purpose, the importance of hand-crafted features is evaluated using recursive feature elimination based on impurity scores in Random Forest (RF) models. Additionally, Gradient-weighted class activation mapping is applied to highlight the most impactful features learned by deep learning models. Our studies highlight the following insights: (1) very simple hand-crafted features can yield performance comparable to deep learning models for pain recognition, especially when properly selected with recursive feature elimination; thus, the use of complex neural networks should be questioned in pain recognition, especially considering their computational costs; and (2) both traditional feature engineering and deep feature learning approaches rely on simple characteristics of the input time-series data to make their decisions in the context of automated pain recognition.
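The recursive-elimination loop can be sketched as below; a correlation-based importance stands in for the paper's Random-Forest impurity scores (an assumption for brevity), but the drop-the-weakest-and-repeat structure is the same.

```python
# Recursive feature elimination with a simple importance score
# (correlation with the label stands in for RF impurity here).
import numpy as np

def rfe(X, y, n_keep):
    """Drop the least important feature until n_keep remain."""
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        imp = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in remaining]
        remaining.pop(int(np.argmin(imp)))
    return remaining

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200).astype(float)
X = rng.normal(size=(200, 6))
X[:, 4] += 2 * y                    # feature 4 carries the signal
kept = rfe(X, y, n_keep=2)
```

Re-scoring after every removal is what distinguishes RFE from a one-shot ranking, since feature importances shift as correlated features drop out.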


Subject(s)
Artificial Intelligence; Galvanic Skin Response; Humans; Neural Networks, Computer; Research; Pain/diagnosis
18.
Sensors (Basel) ; 23(4)2023 Feb 15.
Article in English | MEDLINE | ID: mdl-36850778

ABSTRACT

Human action recognition systems use data collected from a wide range of sensors to accurately identify and interpret human actions. One of the most challenging issues in computer vision is the automatic and precise identification of human activities. A significant increase in feature learning-based representations for action recognition has emerged in recent years, owing to the widespread use of deep learning-based features. This study presents an in-depth analysis of human activity recognition that investigates recent developments in computer vision. Augmented reality, human-computer interaction, cybersecurity, home monitoring, and surveillance cameras are all examples of computer vision applications that often go hand in hand with human action detection. We give a taxonomy-based, rigorous study of human activity recognition techniques, discussing the best ways to acquire human action features derived from RGB and depth data, as well as the latest research on deep learning and hand-crafted techniques. We also describe a generic architecture for recognizing human actions in the real world and the current prominent research topics. Finally, we offer some analytical concepts and proposals for academics. Researchers studying human action recognition in depth will find this review an effective tool.


Subject(s)
Augmented Reality , Pattern Recognition (Automated) , Humans , Computer Security , Hand , Human Activities
19.
Brief Bioinform ; 24(1)2023 01 19.
Artículo en Inglés | MEDLINE | ID: mdl-36642410

RESUMEN

Anticancer peptides (ACPs) are peptides that have been demonstrated to have anticancer activity. Using ACPs to prevent cancer could be a viable alternative to conventional cancer treatments because they are safer and display higher selectivity. Because laboratory identification of ACPs is lab-limited, expensive, and time-consuming, a computational method is proposed in this study to predict ACPs from sequence information. The pipeline takes peptide sequences as input, extracts features in terms of ordinal encoding with positional information and hand-crafted features, and finally performs feature selection. The whole model comprises two modules: deep learning and machine learning algorithms. The deep learning module contains two channels, a bidirectional long short-term memory (BiLSTM) network and a convolutional neural network (CNN), while a Light Gradient Boosting Machine (LightGBM) is used in the machine learning module. Finally, the classification results of the three paths are combined by voting in a model ensemble layer. This study provides insights into ACP prediction using a novel method and presents promising performance. It used a benchmark dataset for further exploration and improvement compared with previous studies. Our final model achieves an accuracy of 0.7895, sensitivity of 0.8153, and specificity of 0.7676, improving on the state-of-the-art studies by at least 2% in all metrics. Hence, this paper presents a novel method that can potentially predict ACPs more effectively and efficiently. The work and source codes are made available to the community of researchers and developers at https://github.com/khanhlee/acp-ope/.
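The ensemble layer described above can be sketched as follows. This is a minimal illustration, assuming a simple majority vote over the three branches (the abstract says "voted" without specifying the scheme); the prediction arrays are toy values, not outputs of the actual BiLSTM, CNN, or LightGBM models.

```python
# Sketch of the model ensemble layer: three branch models each emit a
# binary label per peptide, and the final prediction is the majority vote.
import numpy as np

def majority_vote(*predictions: np.ndarray) -> np.ndarray:
    """Element-wise majority vote over binary (0/1) label arrays."""
    stacked = np.stack(predictions)  # shape: (n_models, n_samples)
    # A sample is labeled positive when more than half of the models agree.
    return (stacked.sum(axis=0) > stacked.shape[0] / 2).astype(int)

# Toy stand-ins for the three paths' predictions on five peptides.
bilstm_pred   = np.array([1, 0, 1, 1, 0])
cnn_pred      = np.array([1, 1, 0, 1, 0])
lightgbm_pred = np.array([0, 1, 1, 1, 1])

final = majority_vote(bilstm_pred, cnn_pred, lightgbm_pred)
print(final)  # -> [1 1 1 1 0]
```

With an odd number of models the vote can never tie, which is one practical reason for ensembling exactly three branches.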


Subject(s)
Deep Learning , Peptides/therapeutic use , Machine Learning , Algorithms , Neural Networks (Computer)
20.
Artículo en Chino | WPRIM (Pacífico Occidental) | ID: wpr-979189

RESUMEN

Background The retail milk tea industry is in a period of rapid development, but there is little research on its nutrient content, which restricts nutritional guidance on milk tea. Objective To determine the levels of nutrients in best-selling handcrafted milk tea in Shanghai and analyze its nutritional characteristics. Methods In 2018 and 2021, a total of 13 handcrafted milk tea brands with ≥3 branch stores in Shanghai were selected by searching for milk tea on the Meituan and Ele.me food delivery platforms, and a total of 122 handcrafted milk tea products were collected from the top three sales categories [milk tea (including all available sweetness levels), milk cover tea, and fruit tea]. National standard methods were used to measure energy, protein, fat, carbohydrate, sugar, trans fatty acid, calcium, caffeine, and tea polyphenol. Results The median energy of the milk tea samples was 310 kJ (per 100 g sample). The main sources of energy were carbohydrate and fat. The levels of energy, protein, and fat in milk cover tea and milk tea were significantly higher than those in fruit tea (P<0.05), and there was no significant difference in carbohydrate among them. The total sugar, fructose, and glucose levels in milk tea were significantly lower than those in milk cover tea and fruit tea, and the lactose level in fruit tea was significantly lower than those in milk tea and milk cover tea (P<0.05). The median trans fatty acid level in milk cover tea was higher than that in milk tea (P<0.05). The median levels of caffeine and tea polyphenol were higher in milk tea than in milk cover tea (P<0.05). The levels of energy, carbohydrate, sucrose, total sugar, and calcium in milk tea were positively correlated with the number of added ingredients (0-3) (r=0.386, 0.371, 0.238, 0.698, and 0.466, respectively, P<0.05).
The levels of energy, carbohydrate, and total sugar tended to increase with increasing sweetness (P<0.05), and the total sugar was mainly sucrose, followed by fructose and glucose. The total sugar levels of the samples labeled sugar free, light sugar, half sugar, less sugar, and regular sugar were 3.40 (2.20, 4.90), 4.97 (4.25, 5.97), 5.80 (4.31, 6.88), 6.59 (5.17, 8.53), and 7.96 (6.82, 9.20) g, respectively; the proportion of samples containing more than 0.5 g of total sugar was 93.3% for sugar-free milk tea, 47.4% for light sugar milk tea, and 94.0% for regular sugar milk tea; the proportion of regular sugar samples with sugar content greater than 10 g was 18.0% (all nominal sugar contents were measured per 100 g). Conclusion Retail handcrafted milk tea in Shanghai is characterized by high energy, high added sugar, high fat, and low protein. It is necessary to standardize added sugar content and sweetness labeling, strengthen nutrition education about milk tea, and guide residents to limit its intake.
