Results 1 - 16 of 16
1.
Quant Imaging Med Surg ; 14(8): 5396-5407, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39144035

ABSTRACT

Background: Deep learning features (DLFs), derived by fusing radiomics features (RFs) with deep learning, have shown potential for enhancing diagnostic capability. However, the limited repeatability and reproducibility of DLFs across multiple centers represent a challenge for the clinical validation of these features. This study therefore aimed to evaluate the repeatability and reproducibility of DLFs and their potential efficiency in differentiating subtypes of lung adenocarcinoma less than 10 mm in size and manifesting as ground-glass nodules (GGNs). Methods: A chest phantom with nodules was scanned repeatedly using different thin-slice computed tomography (TSCT) scanners with varying acquisition and reconstruction parameters. The robustness of the DLFs was measured using the concordance correlation coefficient (CCC) and intraclass correlation coefficient (ICC). A deep learning approach was used to visualize the DLFs. To assess the clinical effectiveness and generalizability of the stable and informative DLFs, 275 patients from three hospitals, with 405 nodules pathologically diagnosed as GGN lung adenocarcinoma less than 10 mm in size, were retrospectively reviewed for clinical validation. Results: A total of 64 DLFs were analyzed, which revealed that slice thickness and slice interval (ICC, 0.79±0.18) and reconstruction kernel (ICC, 0.82±0.07) were significantly associated with the robustness of the DLFs. Feature visualization showed that the DLFs were mainly focused around the nodule areas. In the external validation, a subset of 28 robust DLFs identified as stable under all sources of variability achieved the highest area under the curve [AUC = 0.65, 95% confidence interval (CI): 0.53-0.76] compared with the other DLF models and the radiomics model. Conclusions: Although different manufacturers and scanning schemes affect the reproducibility of DLFs, certain DLFs demonstrated excellent stability and effectively improved the diagnostic efficacy for identifying subtypes of lung adenocarcinoma. Therefore, screening for stable DLFs as a first step in multicenter DLF research may improve diagnostic efficacy and promote the application of these features.
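
As an illustration of the robustness screening step described above, here is a minimal sketch (not the authors' code; NumPy only, entirely synthetic phantom data) that computes Lin's concordance correlation coefficient per feature across two repeated acquisitions and keeps only the stable DLFs:

```python
# Minimal sketch: screening deep-learning features (DLFs) for test-retest
# robustness with the concordance correlation coefficient (CCC).
# `scan_a` and `scan_b` are hypothetical feature matrices (nodules x features)
# from two repeated phantom acquisitions.
import numpy as np

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two measurements."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(0)
scan_a = rng.normal(size=(40, 64))                           # 40 phantom nodules, 64 DLFs
scan_b = scan_a + rng.normal(scale=0.1, size=scan_a.shape)   # repeated scan with noise

ccc_per_feature = np.array([ccc(scan_a[:, j], scan_b[:, j]) for j in range(scan_a.shape[1])])
robust_idx = np.where(ccc_per_feature > 0.90)[0]             # keep only stable DLFs
print(f"{robust_idx.size} of 64 DLFs pass the CCC > 0.90 stability screen")
```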

2.
Comput Biol Med ; 174: 108461, 2024 May.
Article in English | MEDLINE | ID: mdl-38626509

ABSTRACT

BACKGROUND: Positron emission tomography (PET) is extensively employed for diagnosing and staging various tumors, including liver cancer, lung cancer, and lymphoma. Accurate subtype classification of tumors plays a crucial role in formulating effective treatment plans for patients. Notably, lymphoma comprises subtypes like diffuse large B-cell lymphoma and Hodgkin's lymphoma, while lung cancer encompasses adenocarcinoma, small cell carcinoma, and squamous cell carcinoma. Similarly, liver cancer consists of subtypes such as cholangiocarcinoma and hepatocellular carcinoma. Consequently, the subtype classification of tumors based on PET images holds immense clinical significance. However, in clinical practice, the number of cases available for each subtype is often limited and imbalanced. Therefore, the primary challenge lies in achieving precise subtype classification using a small dataset. METHOD: This paper presents a novel approach for tumor subtype classification in small datasets using RA-DL (Radiomics-DeepLearning) attention. To address the limited sample size, a support vector machine (SVM) is employed as the classifier for tumor subtypes instead of deep learning methods. Emphasizing the importance of texture information in tumor subtype recognition, radiomics features are extracted from the tumor regions during the feature extraction stage. These features are compressed using an autoencoder to reduce redundancy. In addition to radiomics features, deep features are also extracted from the tumors to leverage the feature extraction capabilities of deep learning. In contrast to existing methods, our proposed approach utilizes the RA-DL attention mechanism to guide the deep network in extracting complementary deep features that enhance the expressive capacity of the final features while minimizing redundancy. To address the challenges of limited and imbalanced data, our method avoids using classification labels during deep feature extraction and instead incorporates 2D Region of Interest (ROI) segmentation and image reconstruction as auxiliary tasks. Subsequently, all lesion features of a single patient are aggregated into a feature vector using a multi-instance aggregation layer. RESULTS: Validation experiments were conducted on three PET datasets, specifically the liver cancer dataset, lung cancer dataset, and lymphoma dataset. For lung cancer, our proposed method achieved area under the curve (AUC) values of 0.82, 0.84, and 0.83 for the three-class classification task. For the binary classification task of lymphoma, our method demonstrated notable results with AUC values of 0.95 and 0.75. Moreover, in the binary classification task for liver tumors, our method exhibited promising performance with AUC values of 0.84 and 0.86. CONCLUSION: The experimental results clearly indicate that our proposed method significantly outperforms alternative approaches. Through the extraction of complementary radiomics features and deep features, our method achieves a substantial improvement in tumor subtype classification performance using small PET datasets.
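
A rough sketch of the final classification stage described above, assuming scikit-learn and entirely synthetic per-lesion features: lesion-level vectors are mean-pooled into one patient-level vector (a simple stand-in for the paper's multi-instance aggregation layer) and classified with an SVM:

```python
# Illustrative sketch only (not the RA-DL implementation): aggregate per-lesion
# feature vectors into one patient-level vector, then classify subtypes with an SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# hypothetical: each patient has a variable number of lesions, each described by a
# 32-d fused radiomics + deep feature vector
patients = [rng.normal(size=(rng.integers(1, 5), 32)) for _ in range(60)]
labels = rng.integers(0, 2, size=60)                  # binary subtype label per patient

# simple multi-instance aggregation: average the lesion features of each patient
X = np.stack([lesions.mean(axis=0) for lesions in patients])

clf = SVC(kernel="rbf").fit(X[:40], labels[:40])
print("held-out accuracy (toy data):", clf.score(X[40:], labels[40:]))
```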


Subject(s)
Positron-Emission Tomography , Support Vector Machine , Humans , Positron-Emission Tomography/methods , Neoplasms/diagnostic imaging , Neoplasms/classification , Databases, Factual , Deep Learning , Image Interpretation, Computer-Assisted/methods , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/classification , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/classification , Radiomics
3.
Diagnostics (Basel) ; 13(10)2023 May 11.
Article in English | MEDLINE | ID: mdl-37238180

ABSTRACT

BACKGROUND: Although handcrafted radiomics features (RF) are commonly extracted via radiomics software, employing deep features (DF) extracted from deep learning (DL) algorithms merits significant investigation. Moreover, a "tensor" radiomics paradigm, where various flavours of a given feature are generated and explored, can provide added value. We aimed to employ conventional and tensor DFs and compare their outcome prediction performance to conventional and tensor RFs. METHODS: 408 patients with head and neck cancer were selected from TCIA. PET images were first registered to CT, enhanced, normalized, and cropped. We employed 15 image-level fusion techniques (e.g., dual-tree complex wavelet transform (DTCWT)) to combine PET and CT images. Subsequently, 215 RFs were extracted from each tumor in 17 images (or flavours), including CT only, PET only, and 15 fused PET-CT images, through the standardized SERA radiomics software. Furthermore, a 3-dimensional autoencoder was used to extract DFs. To predict the binary progression-free survival outcome, an end-to-end CNN algorithm was first employed. Subsequently, we applied conventional and tensor DFs vs. RFs, as extracted from each image, to three classifiers, namely multilayer perceptron (MLP), random forest, and logistic regression (LR), linked with dimension reduction algorithms. RESULTS: DTCWT fusion linked with the CNN resulted in accuracies of 75.6 ± 7.0% and 63.4 ± 6.7% in five-fold cross-validation and external nested testing, respectively. For the tensor RF framework, polynomial transform algorithms + analysis of variance feature selector (ANOVA) + LR achieved 76.67 ± 3.3% and 70.6 ± 6.7% in the same tests. For the tensor DF framework, PCA + ANOVA + MLP achieved 87.0 ± 3.5% and 85.3 ± 5.2% in both tests. CONCLUSIONS: This study showed that tensor DFs combined with proper machine learning approaches enhanced survival prediction performance compared to conventional DFs, tensor and conventional RFs, and end-to-end CNN frameworks.
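
The PCA + ANOVA + MLP pipeline named in the results can be sketched with scikit-learn as follows; the feature matrix and labels below are synthetic placeholders, not the TCIA data:

```python
# Minimal sketch, assuming scikit-learn: PCA dimension reduction, ANOVA feature
# selection, and an MLP classifier evaluated with five-fold cross-validation.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(408, 256))      # hypothetical flattened tensor deep features
y = rng.integers(0, 2, size=408)     # binary progression-free survival outcome

pipe = Pipeline([
    ("pca", PCA(n_components=50)),              # dimension reduction
    ("anova", SelectKBest(f_classif, k=20)),    # ANOVA feature selection
    ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)),
])
print("5-fold CV accuracy (toy data):", cross_val_score(pipe, X, y, cv=5).mean())
```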

4.
Univers Access Inf Soc ; : 1-16, 2022 Sep 16.
Article in English | MEDLINE | ID: mdl-36160370

ABSTRACT

The COVID-19 pandemic has increased the reliance on video conferencing applications for learning. Accessible video conferencing applications with good learning features can help people with visual impairment when they participate in online classes. This paper investigates the accessibility limitations and the available learning features of the top two current video conferencing applications, namely Zoom and MS Teams. A task-based expert review and a blind user evaluation were conducted using the Web Content Accessibility Guidelines 2.1. In addition, the study identifies the application with the better learning features based on the Universal Design for Learning guidelines. A set of recommendations is outlined for developing more inclusive video conferencing applications for people with visual impairment. The presented ideas can be applied to enhance the learning experience of people with visual impairment.

5.
Viruses ; 14(8)2022 07 28.
Article in English | MEDLINE | ID: mdl-36016288

ABSTRACT

COVID-19, which was declared a pandemic on 11 March 2020, is still infecting millions to date, as the vaccines that have been developed do not prevent the disease but rather reduce the severity of the symptoms. Until a vaccine is developed that can prevent COVID-19 infection, the testing of individuals will be a continuous process. Because medical personnel must monitor and treat all health conditions, the time-consuming process of monitoring and testing all individuals for COVID-19 becomes an impossible task, especially as COVID-19 shares symptoms with the common cold and pneumonia. Some over-the-counter tests have been developed and sold, but they are unreliable and add an additional burden, because false-positive cases have to visit hospitals and undergo specialized diagnostic tests to confirm the diagnosis. Therefore, the need for systems that can detect and diagnose COVID-19 automatically, without human intervention, remains an urgent priority and will remain so, because the same technology can be used for future pandemics and other health conditions. In this paper, we propose a modified machine learning (ML) process that integrates deep learning (DL) algorithms for feature extraction with well-known classifiers to accurately detect and diagnose COVID-19 from chest CT scans. Publicly available datasets were provided by the China Consortium for Chest CT Image Investigation (CC-CCII). The highest average accuracy obtained with the modified ML process was 99.9%, achieved when 2000 features were extracted using GoogleNet and ResNet18 and classified with a support vector machine (SVM). The results obtained using the modified ML process were higher than those of similar methods reported in the extant literature using the same datasets or different datasets of similar size; thus, this study is considered of added value to the current body of knowledge. Further research in this field is required to develop methods that can be applied in hospitals and can better prepare mankind for any future pandemics.
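
A hedged sketch of the described feature-extraction-plus-classifier pattern, assuming PyTorch/torchvision and scikit-learn; the backbone choice, preprocessing, and labels below are placeholders rather than the paper's exact configuration:

```python
# Sketch only: pretrained-CNN feature extraction from CT slices followed by an SVM.
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet18(weights="IMAGENET1K_V1")   # assumes torchvision >= 0.13
backbone.fc = torch.nn.Identity()                      # keep the 512-d penultimate features
backbone.eval()

with torch.no_grad():
    slices = torch.rand(8, 3, 224, 224)                # dummy CT slices, not CC-CCII data
    feats = backbone(slices).numpy()                   # shape (8, 512)

y = np.array([0, 1, 0, 1, 0, 1, 0, 1])                 # dummy COVID-19 / non-COVID labels
clf = SVC(kernel="linear").fit(feats, y)
print(clf.predict(feats[:2]))
```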


Subject(s)
COVID-19 , Deep Learning , Pneumonia , COVID-19/diagnostic imaging , Humans , Pneumonia/diagnostic imaging , SARS-CoV-2 , Tomography, X-Ray Computed/methods
6.
Diagnostics (Basel) ; 12(7)2022 Jul 01.
Article in English | MEDLINE | ID: mdl-35885512

ABSTRACT

Diabetic Retinopathy (DR) is a medical condition present in patients suffering from long-term diabetes. If a diagnosis is not carried out at an early stage, it can lead to vision impairment. High blood sugar in diabetic patients is the main cause of DR, which affects the blood vessels within the retina. Manual detection of DR is a difficult task, since the disease causes structural changes in the retina such as microaneurysms (MAs), exudates (EXs), hemorrhages (HMs), and extra blood vessel growth. In this work, a hybrid technique for the detection and classification of Diabetic Retinopathy in fundus images of the eye is proposed. Transfer learning (TL) is used on pre-trained Convolutional Neural Network (CNN) models to extract features that are combined to generate a hybrid feature vector. This feature vector is passed to various classifiers for binary and multiclass classification of fundus images. System performance is measured using various metrics, and the results are compared with recent approaches for DR detection. The proposed method provides a significant performance improvement in DR detection for fundus images, achieving the highest accuracies of 97.8% for binary classification and 89.29% for multiclass classification.
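
To illustrate the hybrid-feature idea, the following sketch concatenates features from two pretrained CNNs into one vector per image and trains a simple classifier; the chosen backbones (ResNet18 and MobileNetV2), the random inputs, and the classifier are assumptions for illustration only, not the paper's configuration:

```python
# Sketch of hybrid feature fusion: features from two pretrained CNNs are
# concatenated into one vector per fundus image and passed to a classifier.
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

r18 = models.resnet18(weights="IMAGENET1K_V1")
r18.fc = torch.nn.Identity()                       # 512-d features
mnet = models.mobilenet_v2(weights="IMAGENET1K_V1")
mnet.classifier = torch.nn.Identity()              # 1280-d features
r18.eval(); mnet.eval()

with torch.no_grad():
    imgs = torch.rand(10, 3, 224, 224)             # dummy fundus images
    hybrid = np.hstack([r18(imgs).numpy(), mnet(imgs).numpy()])  # 1792-d hybrid vector

y = np.random.randint(0, 2, size=10)               # dummy DR / no-DR labels
clf = LogisticRegression(max_iter=1000).fit(hybrid, y)
print("training accuracy (toy data):", clf.score(hybrid, y))
```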

7.
Environ Sci Pollut Res Int ; 29(34): 51909-51926, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35257344

ABSTRACT

Environmental microorganisms (EMs) offer a highly efficient, harmless, and low-cost solution to environmental pollution. They are used in the sanitation, monitoring, and decomposition of environmental pollutants. However, this depends on the proper identification of suitable microorganisms. To speed up identification, lower its cost, and increase its consistency and accuracy, we propose novel pairwise deep learning features (PDLFs) for analyzing microorganisms. The PDLF technique combines the capabilities of handcrafted and deep learning features. In this technique, we leverage Shi-Tomasi interest points by extracting deep learning features from patches centered at the interest points' locations. Then, to increase the number of potential features that capture intermediate spatial characteristics between nearby interest points, we use Delaunay triangulation and straight-line geometry to pair nearby deep learning features. The potential of the pairwise features is demonstrated on the classification of EMs using SVM, linear discriminant analysis, logistic regression, XGBoost, and random forest classifiers. The pairwise features achieve outstanding results of 99.17%, 91.34%, 91.32%, 91.48%, and 99.56% in accuracy, F1-score, recall, precision, and specificity, respectively, representing increases of about 5.95%, 62.40%, 62.37%, 61.84%, and 3.23% compared to non-paired deep learning features.
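
A rough sketch of the pairing idea, assuming OpenCV and SciPy and a synthetic image: Shi-Tomasi interest points are detected and Delaunay triangulation links nearby points, each edge defining a candidate feature pair:

```python
# Sketch: detect Shi-Tomasi interest points, then use Delaunay triangulation to
# enumerate pairs of nearby points whose patch features could later be concatenated.
import numpy as np
import cv2
from scipy.spatial import Delaunay

img = (np.random.rand(256, 256) * 255).astype(np.uint8)   # placeholder EM micrograph
pts = cv2.goodFeaturesToTrack(img, 50, 0.01, 10)           # maxCorners, quality, minDistance
pts = pts.reshape(-1, 2)

tri = Delaunay(pts)
edges = set()
for simplex in tri.simplices:                               # collect unique triangle edges
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

print(f"{len(pts)} interest points, {len(edges)} candidate feature pairs")
```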


Subject(s)
Deep Learning , Environmental Microbiology , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods
8.
Front Neurosci ; 16: 1018005, 2022.
Article in English | MEDLINE | ID: mdl-36620438

ABSTRACT

To understand students' learning behaviors, this study uses machine learning technologies to analyze data from interactive learning environments and then predicts students' learning outcomes. Using a variety of machine learning classification methods together with quiz results and programming system logs, the study found that students' learning characteristics were correlated with their learning performance when they encountered similar programming practice. We used random forest (RF), support vector machine (SVM), logistic regression (LR), and neural network (NN) algorithms to predict whether students would submit on time in the course. Among them, the NN algorithm showed the best prediction results. Education-related outcomes can be predicted from such data by machine learning techniques, and different machine learning models with different hyperparameters can be tuned to obtain better results.
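
The four classifier families mentioned above can be compared in a few lines of scikit-learn; the per-student feature table below is synthetic and purely illustrative:

```python
# Sketch: compare RF, SVM, LR, and NN on a toy table of per-student learning features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 12))          # hypothetical features (e.g., quiz scores, log counts)
y = rng.integers(0, 2, size=200)        # 1 = submitted on time

classifiers = {
    "RF": RandomForestClassifier(n_estimators=200),
    "SVM": SVC(),
    "LR": LogisticRegression(max_iter=1000),
    "NN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
}
for name, model in classifiers.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```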

9.
Math Biosci Eng ; 18(5): 5790-5815, 2021 06 25.
Article in English | MEDLINE | ID: mdl-34517512

ABSTRACT

A brain tumor is an abnormal growth of brain cells inside the head, which reduces the patient's survival chance if it is not diagnosed at an early stage. Brain tumors vary in size, type, and shape and require distinct therapies for different patients. Manual diagnosis of brain tumors is inefficient, error-prone, and time-consuming. It is also a strenuous task that depends on the radiologist's experience and proficiency. Therefore, a modern and efficient automated computer-assisted diagnosis (CAD) system that can address the aforementioned problems with high accuracy is needed. Aiming to enhance performance and minimise human effort, in this manuscript, the brain MRI image is first pre-processed to improve its visual quality and augment the sample images to avoid over-fitting in the network. Second, tumor proposals (candidate locations) are obtained using an agglomerative clustering-based method. Third, the image proposals and the enhanced input image are passed to a backbone architecture for feature extraction. Fourth, high-quality proposals are retained based on a refinement network, and the others are discarded. Next, these refined proposals are aligned to the same size and, finally, passed to the head network for the desired classification task. The proposed method is a potent tumor grading tool assessed on a publicly available brain tumor dataset. Extensive experiment results show that the proposed method outperformed the existing approaches evaluated on the same dataset and achieved an optimal performance with an overall classification accuracy of 98.04%. In addition, the model yielded accuracies of 98.17%, 98.66%, and 99.24%, sensitivities (recall) of 96.89%, 97.82%, and 99.24%, and specificities of 98.55%, 99.38%, and 99.25% for the meningioma, glioma, and pituitary classes, respectively.
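
A minimal sketch of the proposal step only, under assumptions not stated in the abstract (simple intensity thresholding of a synthetic slice): bright voxels are grouped with agglomerative clustering and each cluster's bounding box becomes a candidate tumor location:

```python
# Sketch: agglomerative clustering of bright pixels to generate candidate proposals.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(4)
slice_img = rng.random((128, 128))                     # placeholder MRI slice
coords = np.argwhere(slice_img > 0.995)                # hypothetical "bright" voxels

clustering = AgglomerativeClustering(n_clusters=None, distance_threshold=20).fit(coords)
for cid in np.unique(clustering.labels_):
    pts = coords[clustering.labels_ == cid]
    y0, x0 = pts.min(axis=0)
    y1, x1 = pts.max(axis=0)
    print(f"proposal {cid}: rows {y0}-{y1}, cols {x0}-{x1}")   # candidate bounding box
```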


Subject(s)
Brain Neoplasms , Glioma , Brain/diagnostic imaging , Brain Neoplasms/diagnostic imaging , Diagnosis, Computer-Assisted , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging
10.
J Clin Med ; 10(14)2021 Jul 14.
Article in English | MEDLINE | ID: mdl-34300266

ABSTRACT

The COVID-19 pandemic continues to spread globally at a rapid pace, and its detection remains a challenge due to the high infectivity of the virus and limited testing availability. One of the most readily available imaging modalities in clinical routine is chest X-ray (CXR), which is often used for diagnostic purposes. Here, we proposed computer-aided detection of COVID-19 in CXR imaging using deep and conventional radiomic features. First, we used a 2D U-Net model to segment the lung lobes. Then, we extracted deep latent-space radiomics by applying a deep convolutional autoencoder (ConvAE) with internal dense layers to obtain low-dimensional deep radiomics. We used the Johnson-Lindenstrauss (JL) lemma, Laplacian scoring (LS), and principal component analysis (PCA) to reduce the dimensionality of the conventional radiomics. The generated low-dimensional deep and conventional radiomics were integrated to classify COVID-19 from pneumonia and healthy patients. We used 704 CXR images for training the entire model (i.e., U-Net, ConvAE, and feature selection in conventional radiomics). Afterward, we independently validated the whole system using a study cohort of 1597 cases. We trained and tested a random forest model for detecting COVID-19 cases through multivariate binary-class and multiclass classification. The maximal (full multivariate) model using a combination of the two radiomic groups yielded cross-validated classification accuracies of 72.6% (69.4-74.4%) for multiclass and 89.6% (88.4-90.7%) for binary-class classification.
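
The dimensionality-reduction-and-fusion idea can be sketched with scikit-learn as follows; the radiomics and latent features are synthetic, and the JL step is approximated here with a Gaussian random projection:

```python
# Sketch: reduce conventional radiomics with a JL-style random projection and PCA,
# concatenate with deep latent features, and classify with a random forest.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
conventional = rng.normal(size=(704, 400))   # hypothetical handcrafted radiomics
deep_latent = rng.normal(size=(704, 64))     # hypothetical ConvAE latent features
y = rng.integers(0, 3, size=704)             # COVID-19 / pneumonia / healthy (toy labels)

jl = GaussianRandomProjection(n_components=64, random_state=0).fit_transform(conventional)
pcs = PCA(n_components=16).fit_transform(conventional)
X = np.hstack([deep_latent, jl, pcs])        # integrated low-dimensional radiomics

clf = RandomForestClassifier(n_estimators=300).fit(X, y)
print("training accuracy (illustrative only):", clf.score(X, y))
```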

11.
Biosensors (Basel) ; 10(11)2020 Oct 31.
Article in English | MEDLINE | ID: mdl-33142939

ABSTRACT

Breast cancer is the most common cancer in women. Early diagnosis, the cornerstone of breast cancer treatment, improves outcome and survival. Thermography has been utilized as a complementary diagnostic technique in breast cancer detection. Artificial intelligence (AI) has the capacity to capture and analyze all of the concealed information in thermography. In this study, we propose a method to potentially detect the immunohistochemical response to breast cancer by finding thermally heterogeneous patterns in the targeted area. For breast cancer screening, 208 subjects participated, and normal and abnormal conditions (diagnosed by mammography or clinical diagnosis) were analyzed. High-dimensional deep thermomic features were extracted with the pre-trained ResNet-50 model from a low-rank thermal matrix approximation obtained using sparse principal component analysis. Then, a sparse deep autoencoder, designed and trained for such data, reduced the dimensionality to 16 latent-space thermomic features. A random forest model was used to classify the participants. The proposed method preserves thermal heterogeneity, which leads to successful classification between normal and abnormal subjects with an accuracy of 78.16% (73.3-81.07%). By non-invasively capturing a thermal map of the entire tumor, the proposed method can assist in screening and diagnosing this malignancy. These thermal signatures may preoperatively stratify the patients for personalized treatment planning and potentially monitor the patients during treatment.
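
A sketch of the compression-plus-classification stage, assuming PyTorch and scikit-learn with synthetic feature data; the simple L1 penalty below stands in for the paper's sparse autoencoder design:

```python
# Sketch: compress high-dimensional "thermomic" features to a 16-d latent space with
# a small autoencoder, then separate normal from abnormal cases with a random forest.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
feats = rng.normal(size=(208, 2048)).astype("float32")   # hypothetical ResNet-50 features
labels = rng.integers(0, 2, size=208)                    # normal / abnormal (toy labels)

enc = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 16))
dec = nn.Sequential(nn.Linear(16, 256), nn.ReLU(), nn.Linear(256, 2048))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.from_numpy(feats)
for _ in range(200):                                     # brief reconstruction training
    opt.zero_grad()
    latent = enc(x)
    loss = nn.functional.mse_loss(dec(latent), x) + 1e-3 * latent.abs().mean()  # sparsity term
    loss.backward()
    opt.step()

latents = enc(x).detach().numpy()
rf = RandomForestClassifier(n_estimators=200).fit(latents, labels)
print("training accuracy (illustrative):", rf.score(latents, labels))
```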


Subject(s)
Breast Neoplasms/diagnosis , Deep Learning , Vasodilation , Artificial Intelligence , Biomarkers , Early Detection of Cancer , Female , Humans , Mammography , Thermography
12.
Diagnostics (Basel) ; 10(8)2020 Aug 06.
Article in English | MEDLINE | ID: mdl-32781795

ABSTRACT

Manual identification of brain tumors is an error-prone and tedious process for radiologists; therefore, it is crucial to adopt an automated system. Binary classification, such as malignant versus benign, is relatively trivial, whereas multimodal brain tumor classification (T1, T2, T1CE, and FLAIR) is a challenging task for radiologists. Here, we present an automated multimodal classification method using deep learning for brain tumor type classification. The proposed method consists of five core steps. In the first step, linear contrast stretching is applied using edge-based histogram equalization and the discrete cosine transform (DCT). In the second step, deep learning feature extraction is performed. By utilizing transfer learning, two pre-trained convolutional neural network (CNN) models, namely VGG16 and VGG19, were used for feature extraction. In the third step, a correntropy-based joint learning approach was implemented along with the extreme learning machine (ELM) for the selection of the best features. In the fourth step, the partial least squares (PLS)-based robust covariant features were fused into one matrix. The combined matrix was fed to the ELM for final classification. The proposed method was validated on the BraTS datasets, and accuracies of 97.8%, 96.9%, and 92.5% were achieved for BraTS2015, BraTS2017, and BraTS2018, respectively.
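
The PLS-based fusion step can be illustrated as follows, assuming scikit-learn; synthetic feature blocks stand in for the VGG16 and VGG19 features, and a ridge classifier stands in for the paper's ELM:

```python
# Sketch: project two deep-feature blocks into a shared PLS latent space and
# classify the concatenated scores.
import numpy as np
from sklearn.cross_decomposition import PLSCanonical
from sklearn.linear_model import RidgeClassifier

rng = np.random.default_rng(7)
f_vgg16 = rng.normal(size=(300, 512))      # hypothetical VGG16 features
f_vgg19 = rng.normal(size=(300, 512))      # hypothetical VGG19 features
y = rng.integers(0, 4, size=300)           # four tumor modality/type classes (toy labels)

pls = PLSCanonical(n_components=10).fit(f_vgg16, f_vgg19)
s16, s19 = pls.transform(f_vgg16, f_vgg19)              # paired latent scores
fused = np.hstack([s16, s19])                           # fused covariant features

clf = RidgeClassifier().fit(fused, y)
print("training accuracy (illustrative):", clf.score(fused, y))
```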

13.
Sensors (Basel) ; 19(19)2019 Sep 24.
Article in English | MEDLINE | ID: mdl-31554229

ABSTRACT

The fields of human activity analysis have recently begun to diversify. Many researchers have taken great interest in developing action recognition or action prediction methods. Research on human action evaluation differs in that it aims to design computational models and evaluation approaches for automatically assessing the quality of human actions. This line of study has become popular because of its rapidly emerging real-world applications, such as physical rehabilitation, assisted living for elderly people, skill training on self-learning platforms, and sports activity scoring. This paper presents a comprehensive survey of approaches and techniques in action evaluation research, including motion detection and preprocessing using skeleton data, handcrafted feature representation methods, and deep learning-based feature representation methods. The benchmark datasets from this research field and some of the evaluation criteria employed to validate the algorithms' performance are introduced. Finally, the authors present several promising future directions for further studies.


Subject(s)
Deep Learning , Algorithms , Humans , Machine Learning
14.
Front Mol Biosci ; 6: 44, 2019.
Article in English | MEDLINE | ID: mdl-31245384

ABSTRACT

The development of machine learning solutions for predicting the functional and clinical significance of cancer driver genes and mutations is paramount in modern biomedical research and has gained significant momentum in the past decade. In this work, we integrate different machine learning approaches, including tree-based methods, namely random forest and gradient boosted tree (GBT) classifiers, along with deep convolutional neural networks (CNNs), for the prediction of cancer driver mutations in genomic datasets. The feasibility of CNNs in using raw nucleotide sequences for classification of cancer driver mutations was initially explored by employing label encoding, one-hot encoding, and embedding to preprocess the DNA information. These classifiers were benchmarked against their tree-based alternatives in order to evaluate their performance on a relative scale. We then integrated the DNA-based scores generated by the CNN with various categories of conservation, evolutionary, and functional features into a generalized random forest classifier. The results of this study demonstrate that CNNs can learn high-level features from genomic information that are complementary to the ensemble-based predictors often employed for classification of cancer mutations. By combining the deep learning-generated score with only two main ensemble-based functional features, we achieve superior performance across various machine learning classifiers. Our findings also suggest that the synergy of nucleotide-based deep learning scores and integrated metrics derived from protein sequence conservation scores allows for robust classification of cancer driver mutations with a limited number of highly informative features. Machine learning predictions are leveraged in molecular simulations, protein stability analysis, and network-based analysis of cancer mutations in the protein kinase genes to obtain insights into the molecular signatures of driver mutations and to enhance the interpretability of cancer-specific classification models.
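
An illustrative sketch of the encoding step, assuming PyTorch: a nucleotide window around a mutation is one-hot encoded and scored by a small 1D CNN; the window, architecture, and score below are placeholders, not the study's model:

```python
# Sketch: one-hot encode a DNA window and score it with a tiny 1D CNN. Such a score
# could later be combined with conservation features in a random forest.
import numpy as np
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Encode a nucleotide string as a 4 x L one-hot matrix."""
    arr = np.zeros((4, len(seq)), dtype="float32")
    for i, b in enumerate(seq):
        if b in BASES:
            arr[BASES.index(b), i] = 1.0
    return arr

window = "ACGTTGCAACGTAGCTAGCTAACGT"                  # hypothetical 25-nt mutation context
x = torch.from_numpy(one_hot(window)).unsqueeze(0)    # shape (1, 4, 25)

cnn = nn.Sequential(
    nn.Conv1d(4, 16, kernel_size=5), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(16, 1), nn.Sigmoid(),                   # driver-vs-passenger style score
)
print("DNA-based CNN score (untrained, illustrative):", cnn(x).item())
```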

15.
BMC Med Educ ; 17(1): 119, 2017 Jul 14.
Article in English | MEDLINE | ID: mdl-28705158

ABSTRACT

BACKGROUND: Postpartum hemorrhage (PPH) is a major cause of maternal morbidity and mortality. In Tanzania, PPH causes 25% of maternal deaths. Skilled attendance is crucial to saving the lives of mothers and their newborns during childbirth. This study is a follow-up to multi-professional simulation training on PPH in northern Tanzania. The purpose was to enhance understanding and gain knowledge of important learning features and outcomes related to multi-professional simulation training on PPH. METHODS: The study had a descriptive and exploratory design. After the second annual simulation training at two hospitals in northern Tanzania, ten focus group discussions comprising 42 nurse midwives, doctors, and medical attendants were carried out. A semi-structured interview guide was used during the discussions, which were audio-taped for qualitative content analysis of manifest content. RESULTS: The most important findings from the focus group discussions were the importance of team training as a learning feature and the perception of an improved ability to use a teamwork approach to PPH. Regardless of profession and job tasks, the informants expressed enhanced self-efficacy and a reduced perception of stress. The informants perceived that improved competence enabled them to provide efficient PPH management for improved maternal health. They recommended that the simulation training be continued and disseminated. CONCLUSION: Learning features such as training in teams, skills training, and realistic repeated scenarios with consecutive debriefing for reflective learning, including a systems approach to human error, were crucial for enhanced teamwork. Informants' confidence levels increased, their stress levels decreased, and they were confident that they offered better maternal services after training.


Subject(s)
Clinical Competence/standards , Education, Medical, Continuing/standards , Midwifery/education , Obstetrics/education , Postpartum Hemorrhage/prevention & control , Simulation Training , Adult , Attitude of Health Personnel , Delivery, Obstetric , Female , Follow-Up Studies , Humans , Infant, Newborn , Midwifery/standards , Obstetrics/standards , Patient Care Team/standards , Postpartum Hemorrhage/therapy , Pregnancy , Program Evaluation , Qualitative Research , Quality Improvement/standards , Tanzania
16.
Cogn Sci ; 38(6): 1078-101, 2014 Aug.
Article in English | MEDLINE | ID: mdl-23800216

ABSTRACT

It is possible to learn multiple layers of non-linear features by backpropagating error derivatives through a feedforward neural network. This is a very effective learning procedure when there is a huge amount of labeled training data, but for many learning tasks very few labeled examples are available. In an effort to overcome the need for labeled data, several different generative models were developed that learned interesting features by modeling the higher order statistical structure of a set of input vectors. One of these generative models, the restricted Boltzmann machine (RBM), has no connections between its hidden units and this makes perceptual inference and learning much simpler. More significantly, after a layer of hidden features has been learned, the activities of these features can be used as training data for another RBM. By applying this idea recursively, it is possible to learn a deep hierarchy of progressively more complicated features without requiring any labeled data. This deep hierarchy can then be treated as a feedforward neural network which can be discriminatively fine-tuned using backpropagation. Using a stack of RBMs to initialize the weights of a feedforward neural network allows backpropagation to work effectively in much deeper networks and it leads to much better generalization. A stack of RBMs can also be used to initialize a deep Boltzmann machine that has many hidden layers. Combining this initialization method with a new method for fine-tuning the weights finally leads to the first efficient way of training Boltzmann machines with many hidden layers and millions of weights.
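
The greedy layer-wise procedure can be sketched with scikit-learn's BernoulliRBM on toy binary data: each RBM is trained, without labels, on the hidden activities of the one below it, and the resulting stack could then initialize a feedforward network:

```python
# Sketch: greedy layer-wise stacking of restricted Boltzmann machines on toy data.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(8)
X = (rng.random((500, 64)) > 0.5).astype(float)     # toy binary input vectors

rbm1 = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0).fit(X)
H1 = rbm1.transform(X)                              # hidden activities of layer 1
rbm2 = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0).fit(H1)
H2 = rbm2.transform(H1)                             # deeper features, learned without labels

print("layer shapes:", X.shape, H1.shape, H2.shape)
```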


Subject(s)
Learning , Models, Neurological , Neural Networks, Computer , Artificial Intelligence , Computer Simulation , Humans