Results 1 - 5 of 5
1.
Sensors (Basel) ; 21(13)2021 Jun 26.
Article in English | MEDLINE | ID: mdl-34206750

ABSTRACT

A hierarchical time series is a set of data sequences organized by aggregation constraints, and it represents many real-world applications in research and industry. Forecasting hierarchical time series is a challenging and time-consuming problem because the forecasts must remain consistent across the hierarchy levels according to their dimensional features. The excellent empirical performance of our Deep Long Short-Term Memory (DLSTM) approach on various forecasting tasks motivated us to extend it to the forecasting problem in hierarchical architectures. Toward this target, we develop the DLSTM model in an auto-encoder (AE) fashion and take full advantage of the hierarchical architecture for better time series forecasting. DLSTM-AE works as an alternative to the traditional and machine learning approaches that have been used for hierarchical forecasting. However, training a DLSTM in hierarchical architectures requires updating the weight vectors for each LSTM cell, which is time-consuming and requires a large amount of data across several dimensions. Transfer learning can mitigate this problem: we first train the time series at the bottom level of the hierarchy using the proposed DLSTM-AE approach, and then transfer the learned features to perform synchronous training for the time series of the upper levels of the hierarchy. To demonstrate the efficiency of the proposed approach, we compare its performance with existing approaches using two case studies from the energy and tourism domains. All approaches were evaluated on two criteria, namely the forecasting accuracy and the ability to produce coherent forecasts through the hierarchy. In both case studies, the proposed approach attained the highest accuracy among all counterparts and produced more coherent forecasts. An illustrative code sketch follows this record.


Subject(s)
Machine Learning , Neural Networks, Computer , Forecasting , Long-Term Memory
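
The record above describes pre-training a DLSTM auto-encoder on the bottom-level series and transferring the learned features to the upper levels of the hierarchy. The sketch below illustrates that idea only; it assumes a Keras/TensorFlow implementation, and the window length, layer sizes, and synthetic data are assumptions rather than the paper's settings.

```python
# Minimal sketch (not the authors' code): pre-train an LSTM autoencoder on
# bottom-level series, then reuse the learned encoder when fitting forecasters
# for upper hierarchy levels.
import numpy as np
from tensorflow.keras import layers, models

WIN = 24          # input window length (assumption)
HORIZON = 1       # one-step-ahead forecast (assumption)

def make_autoencoder():
    return models.Sequential([
        layers.Input(shape=(WIN, 1)),
        layers.LSTM(32, return_sequences=False, name="encoder"),
        layers.RepeatVector(WIN),
        layers.LSTM(32, return_sequences=True),
        layers.TimeDistributed(layers.Dense(1)),
    ])

def make_forecaster(encoder_weights=None):
    model = models.Sequential([
        layers.Input(shape=(WIN, 1)),
        layers.LSTM(32, return_sequences=False, name="encoder"),
        layers.Dense(HORIZON),
    ])
    if encoder_weights is not None:          # transfer the learned features
        model.get_layer("encoder").set_weights(encoder_weights)
    return model

# Toy bottom-level and aggregated (upper-level) arrays for illustration only.
rng = np.random.default_rng(0)
bottom = rng.normal(size=(512, WIN, 1)).astype("float32")
upper_x = bottom + 0.1 * rng.normal(size=bottom.shape).astype("float32")
upper_y = upper_x[:, -1, 0:1] + 0.05

# 1) Unsupervised pre-training on the bottom level (autoencoder reconstructs its input).
ae = make_autoencoder()
ae.compile(optimizer="adam", loss="mse")
ae.fit(bottom, bottom, epochs=2, batch_size=64, verbose=0)

# 2) Transfer the encoder weights and train a forecaster for an upper level.
fc = make_forecaster(encoder_weights=ae.get_layer("encoder").get_weights())
fc.compile(optimizer="adam", loss="mse")
fc.fit(upper_x, upper_y, epochs=2, batch_size=64, verbose=0)
```

In practice, the upper-level models would be trained on the aggregated series defined by the hierarchy's summing constraints rather than the toy arrays used here.
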
2.
Biomed Mater Eng ; 31(2): 73-94, 2020.
Article in English | MEDLINE | ID: mdl-32474459

ABSTRACT

BACKGROUND: Neurological disorders are significant problems of the nervous system that affect essential functions of the human brain and spinal cord. Monitoring brain activity through electroencephalography (EEG) has become an important tool in the diagnosis of brain disorders. Robust automatic classification of EEG signals is an important step towards detecting a brain disorder at an early stage, before the patient's condition deteriorates. OBJECTIVE: Motivated by the computational capabilities of natural evolution strategies (NES), this paper introduces an effective automatic classification approach denoted natural evolution optimization-based deep learning (NEODL). The proposed classifier is one ingredient in a signal processing chain that combines other state-of-the-art techniques in a consistent framework for automatic EEG classification. METHODS: The proposed framework consists of four steps. First, the L1-principal component analysis technique is used to clean the raw EEG signal of expected artifacts and noise. Second, the purified EEG signal is decomposed into a number of sub-bands by applying the wavelet transform, from which a number of spectral and statistical features are extracted. Third, the extracted features are examined using the artificial bee colony algorithm in order to select the best features. Lastly, the selected features are passed to the proposed NEODL classifier, which classifies the input signal according to the problem at hand. RESULTS: The proposed approach is evaluated using two benchmark datasets and addresses two neurological disorder applications: epilepsy and motor imagery. Several experiments are conducted in which the proposed classifier outperforms other deep learning techniques as well as other existing approaches. CONCLUSION: The proposed framework, including the proposed classifier (NEODL), shows promising performance in the classification of EEG signals for both epilepsy and motor imagery. Based on the given results, the approach is also expected to be useful for identifying epileptogenic areas in the human brain. Accordingly, it may find application in neuro-intensive care units, epilepsy monitoring units, and practical brain-computer interface systems in clinics. An illustrative code sketch of the wavelet feature-extraction step follows this record.


Subject(s)
Algorithms , Deep Learning , Nervous System Diseases/classification , Signal Processing, Computer-Assisted , Brain/diagnostic imaging , Brain/physiopathology , Brain Diseases/classification , Brain Diseases/diagnosis , Brain-Computer Interfaces , Calibration , Deep Learning/standards , Electroencephalography/methods , Electroencephalography/standards , Humans , Nervous System Diseases/diagnosis , Nervous System Diseases/physiopathology
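
Of the four-step chain listed above, the step most amenable to a short, self-contained example is the wavelet sub-band decomposition with spectral/statistical features. The sketch below assumes PyWavelets with an arbitrary wavelet family, decomposition level, and feature set; it is not the authors' implementation of NEODL, L1-PCA, or the artificial bee colony selector.

```python
# Sketch of one stage of the described chain: wavelet decomposition of an EEG
# segment followed by simple statistical features per sub-band. The wavelet
# family, level, and features are assumptions, not the paper's exact settings.
import numpy as np
import pywt

def wavelet_features(segment, wavelet="db4", level=4):
    """Return a feature vector (mean, std, energy, log-energy entropy) per sub-band."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)  # [cA_n, cD_n, ..., cD_1]
    feats = []
    for band in coeffs:
        energy = float(np.sum(band ** 2))
        # Entropy of the normalized squared coefficients in this sub-band.
        p = band ** 2 / (energy + 1e-12)
        entropy = float(-np.sum(p * np.log(p + 1e-12)))
        feats.extend([band.mean(), band.std(), energy, entropy])
    return np.array(feats)

# Toy EEG-like segment (1 s at 256 Hz): a 10 Hz rhythm plus noise.
rng = np.random.default_rng(1)
segment = np.sin(2 * np.pi * 10 * np.arange(256) / 256) + 0.5 * rng.normal(size=256)
print(wavelet_features(segment).shape)   # (n_subbands * 4,) feature vector
```
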
3.
Sensors (Basel) ; 21(1)2020 Dec 31.
Article in English | MEDLINE | ID: mdl-33396448

ABSTRACT

Food security has become an increasingly important challenge for all countries globally, particularly as the world population continues to grow and arable land diminishes due to urbanization. Water scarcity and labor shortages place further strain on traditional agriculture and food production. The problem is worse in countries with arid land and a harsh climate, which widens the food gap in these countries. Therefore, smart and practical solutions to promote cultivation and address food production challenges are highly needed. As controllable environments, greenhouses are well suited to improving crop production and quality in harsh-climate regions. Monitoring and controlling the greenhouse microclimate is a real problem, as growers have to manage various parameters to ensure optimal crop growth. This paper presents our research, in which we established a multi-tier, cloud-based Internet of Things (IoT) platform to enhance the greenhouse microclimate. As a case study, we applied the IoT platform to cucumber cultivation in a soilless medium inside a commercial-sized greenhouse. The established platform connected all sensors, controllers, and actuators placed in the greenhouse, providing long-distance communication to monitor, control, and manage the greenhouse. The implemented platform increased cucumber yield and enhanced its quality. Moreover, the platform improved water-use efficiency and decreased consumption of electrical energy. Based on its positive impact on water-use efficiency and on cucumber fruit yield and quality, the established platform appears well suited to soilless greenhouse cultivation in arid regions.
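
As a rough illustration of the kind of edge-to-cloud loop such a platform runs, the sketch below posts microclimate readings to a hypothetical cloud ingestion endpoint and applies a simple temperature threshold for a ventilation actuator. The URL, payload fields, and set point are assumptions, not the platform's actual interface or control logic.

```python
# Minimal edge-node sketch, assuming the cloud tier exposes an HTTP ingestion
# endpoint. Everything named here (URL, fields, set point) is illustrative.
import json
import random
import time
import urllib.request

INGEST_URL = "https://cloud.example.com/greenhouse/1/readings"  # hypothetical endpoint
TEMP_MAX_C = 30.0   # assumed ventilation set point

def read_sensors():
    # Placeholder for real sensor drivers (temperature, humidity, EC, pH, ...).
    return {"temp_c": round(random.uniform(20, 35), 1),
            "rh_pct": round(random.uniform(40, 90), 1)}

def post_json(url, payload):
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

for _ in range(3):                                  # a few cycles for illustration
    reading = read_sensors()
    # Simple rule: request ventilation when the greenhouse is too warm.
    reading["fan"] = "on" if reading["temp_c"] > TEMP_MAX_C else "off"
    post_json(INGEST_URL, reading)                  # send to the cloud tier
    time.sleep(5)
```
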

4.
Sci Rep ; 9(1): 19038, 2019 12 13.
Article in English | MEDLINE | ID: mdl-31836728

ABSTRACT

Currently, most real-world time series datasets are multivariate and are rich in dynamical information about the underlying system. Such datasets are attracting much attention; therefore, the need for accurate modelling of such high-dimensional datasets is increasing. Recently, the deep architecture of the recurrent neural network (RNN) and its variant, long short-term memory (LSTM), have been proven to be more accurate than traditional statistical methods in modelling time series data. Despite the reported advantages of the deep LSTM model, its performance in modelling multivariate time series (MTS) data has not been satisfactory, particularly when processing highly non-linear and long-interval MTS datasets. The reason is that the supervised learning approach initializes the neurons of such recurrent networks randomly, which hinders the neurons from properly learning the latent features of the correlated variables in the MTS dataset. In this paper, we propose a pre-trained LSTM-based stacked autoencoder (LSTM-SAE) approach, trained in an unsupervised fashion, to replace the random weight initialization strategy adopted in deep LSTM recurrent networks. For evaluation purposes, two case studies with real-world datasets are investigated, in which the performance of the proposed approach compares favourably with the deep LSTM approach. In addition, the proposed approach outperforms several reference models on the same case studies. Overall, the experimental results clearly show that the unsupervised pre-training approach improves the performance of deep LSTM and leads to better and faster convergence than other models.
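
A minimal sketch of the layer-wise pre-training idea described above, assuming a Keras/TensorFlow implementation: each LSTM layer is first trained as an autoencoder, and its weights then initialize the corresponding layer of a deep LSTM forecaster before supervised fine-tuning. Layer sizes, window length, and the toy data are illustrative, not the paper's settings.

```python
# Sketch of the layer-wise idea (not the authors' exact code): pre-train LSTM
# layers as autoencoders, then copy their weights into a deep LSTM forecaster
# instead of using random initialization.
import numpy as np
from tensorflow.keras import layers, models

T, F = 20, 3  # window length and number of variables (assumptions)

# --- First LSTM autoencoder: learns features of the raw multivariate window.
inp1 = layers.Input(shape=(T, F))
enc1 = layers.LSTM(32, return_sequences=True, name="enc1")(inp1)
dec1 = layers.TimeDistributed(layers.Dense(F))(enc1)   # reconstruct each time step
ae1 = models.Model(inp1, dec1)
encoder1 = models.Model(inp1, enc1)                    # exposes the hidden sequences

# --- Second LSTM autoencoder: learns features of the first layer's outputs.
inp2 = layers.Input(shape=(T, 32))
enc2 = layers.LSTM(16, return_sequences=True, name="enc2")(inp2)
dec2 = layers.TimeDistributed(layers.Dense(32))(enc2)
ae2 = models.Model(inp2, dec2)

rng = np.random.default_rng(0)
x = rng.normal(size=(256, T, F)).astype("float32")
y = x[:, -1, :1]                                       # toy one-step-ahead target

ae1.compile("adam", "mse")
ae1.fit(x, x, epochs=2, verbose=0)                     # unsupervised pre-training, layer 1
h1 = encoder1.predict(x, verbose=0)
ae2.compile("adam", "mse")
ae2.fit(h1, h1, epochs=2, verbose=0)                   # unsupervised pre-training, layer 2

# --- Deep LSTM forecaster initialized with the pre-trained layers.
fin = layers.Input(shape=(T, F))
h = layers.LSTM(32, return_sequences=True, name="enc1")(fin)
h = layers.LSTM(16, name="enc2")(h)
out = layers.Dense(1)(h)
forecaster = models.Model(fin, out)
forecaster.get_layer("enc1").set_weights(ae1.get_layer("enc1").get_weights())
forecaster.get_layer("enc2").set_weights(ae2.get_layer("enc2").get_weights())
forecaster.compile("adam", "mse")
forecaster.fit(x, y, epochs=2, verbose=0)              # supervised fine-tuning
```
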

5.
Entropy (Basel) ; 21(8)2019 Aug 06.
Article in English | MEDLINE | ID: mdl-33267477

ABSTRACT

Pattern classification represents a challenging problem in machine learning and data science research, especially when training samples are limited. In recent years, artificial neural network (ANN) algorithms have demonstrated astonishing performance compared to traditional generative and discriminative classification algorithms. However, due to the complexity of classical ANN architectures, ANNs are sometimes incapable of providing efficient solutions to complex distribution problems. Motivated by the mathematical definition of a quantum bit (qubit), we propose a novel autonomous perceptron model (APM) that addresses the architectural complexity of traditional ANNs. APM is a nonlinear classification model with a simple, fixed architecture inspired by the computational superposition power of the qubit. The proposed perceptron is able to construct its activation operators autonomously after a limited number of iterations. Several experiments on various datasets are conducted, and the empirical results show the superiority of the proposed model as a classifier in terms of accuracy and computational time when compared with baseline classification models.
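
The abstract does not give APM's formulation, so no sketch of the proposed model is attempted here. For reference, the sketch below is a minimal classical single-layer perceptron of the kind used as a baseline in such comparisons; the learning rate, epoch count, and toy data are illustrative assumptions.

```python
# NOT the proposed APM: a minimal classical single-layer perceptron, of the
# sort used as a baseline classifier in such comparisons.
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=50):
    """Classic perceptron: y must be in {-1, +1}; returns weights and bias."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:      # misclassified sample -> update
                w += lr * yi * xi
                b += lr * yi
    return w, b

def predict(X, w, b):
    return np.where(X @ w + b >= 0, 1, -1)

# Toy linearly separable data.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

w, b = train_perceptron(X, y)
print("training accuracy:", (predict(X, w, b) == y).mean())
```
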
