Results 1 - 20 of 1,322
1.
Spectrochim Acta A Mol Biomol Spectrosc ; 324: 124968, 2025 Jan 05.
Article in English | MEDLINE | ID: mdl-39153348

ABSTRACT

Ultraviolet-visible (UV-Vis) absorption spectroscopy, due to its high sensitivity and capability for real-time online monitoring, is one of the most promising tools for the rapid identification of external water in rainwater pipe networks. However, difficulties in obtaining actual samples lead to insufficient real samples, and the complex composition of wastewater can affect the accurate traceability analysis of external water in rainwater pipe networks. In this study, a new method for identifying external water in rainwater pipe networks with a small number of samples is proposed. In this method, the Generative Adversarial Network (GAN) algorithm was initially used to generate spectral data from the absorption spectra of water samples; subsequently, the multiplicative scatter correction (MSC) algorithm was applied to process the UV-Vis absorption spectra of different types of water samples; following this, the Variational Mode Decomposition (VMD) algorithm was employed to decompose and recombine the spectra after MSC; and finally, the long short-term memory (LSTM) algorithm was used to establish the identification model between the recombined spectra and the water source types, and to determine the optimal number of decomposed spectra K. The research results show that when the number of decomposed spectra K is 5, the identification accuracy for different sources of domestic sewage, surface water, and industrial wastewater is the highest, with an overall accuracy of 98.81%. Additionally, the performance of this method was validated by mixed water samples (combinations of rainwater and domestic sewage, rainwater and surface water, and rainwater and industrial wastewater). The results indicate that the accuracy of the proposed method in identifying the source of external water in rainwater reaches 98.99%, with detection time within 10 s. 
Therefore, the proposed method is a promising approach for rapid identification and traceability analysis of external water in rainwater pipe networks.
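
As a point of reference for the MSC step described above, here is a minimal NumPy sketch of multiplicative scatter correction; the function name `msc` and the use of the mean spectrum as the default reference are illustrative assumptions, not details from the paper.

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction: fit each spectrum to a
    reference by least squares (x ~ a*ref + b), then remove the
    multiplicative (a) and additive (b) scatter effects."""
    spectra = np.asarray(spectra, dtype=float)
    ref = spectra.mean(axis=0) if reference is None else np.asarray(reference, dtype=float)
    corrected = np.empty_like(spectra)
    for i, x in enumerate(spectra):
        a, b = np.polyfit(ref, x, deg=1)  # x ≈ a*ref + b
        corrected[i] = (x - b) / a
    return corrected
```

Each spectrum is regressed against the reference and the fitted slope and intercept are removed, so spectra that differ from the reference only by scatter effects collapse back onto it.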

2.
J Integr Bioinform ; 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39238451

ABSTRACT

Drug therapy remains the primary approach to treating tumours. Variability among cancer patients, including variations in genomic profiles, often results in divergent therapeutic responses to analogous anti-cancer drug treatments within the same cohort. Hence, predicting drug response by analysing the genomic profile characteristics of individual patients holds significant research importance. With the notable progress in machine learning and deep learning, many effective methods have emerged for predicting drug responses using features of both drugs and cell lines. However, these methods remain inadequate at capturing a sufficient number of the features inherent to drugs. Consequently, we propose a representational approach for drugs that incorporates three distinct types of features: the molecular graph, the SMILES string, and the molecular fingerprint. In this study, a novel deep learning model, named MCMVDRP, is introduced for the prediction of cancer drug responses. In our proposed model, these extracted features are amalgamated, and fully connected layers are then used to predict the drug response in terms of IC50 values. Experimental results demonstrate that the presented model outperforms current state-of-the-art models.

3.
Heliyon ; 10(16): e36232, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39253252

ABSTRACT

This paper presents an innovative fusion model called "CALSE-LSTM," which integrates Convolutional Neural Networks (CNNs), Long Short-Term Memory Networks (LSTMs), self-attention mechanisms, and squeeze-and-excitation attention mechanisms to optimize the estimation accuracy of the State of Charge (SoC). The model incorporates battery historical data as input and employs a dual-attention mechanism based on CNN-LSTM to extract diverse features from the input data, thereby enhancing the model's ability to learn hidden information. To further improve model performance, we fine-tune the model parameters using the Pelican algorithm. Experiments conducted under Urban Dynamometer Driving Schedule (UDDS) conditions show that the CALSE-LSTM model achieves a Root Mean Squared Error (RMSE) of only 1.73 % in lithium battery SoC estimation, significantly better than GRU, LSTM, and CNN-LSTM models, reducing errors by 31.9 %, 31.3 %, and 15 %, respectively. Ablation experiments further confirm the effectiveness of the dual-attention mechanism and its potential to improve SoC estimation performance. Additionally, we validate the learning efficiency of CALSE-LSTM by comparing model training time with the number of iterations. Finally, in the comparative experiment with the Kalman filtering method, the model in this paper significantly improved its performance by incorporating power consumption as an additional feature input. This further verifies the accuracy of CALSE-LSTM in estimating the State of Charge (SoC) of lithium batteries.
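
The squeeze-and-excitation mechanism mentioned above can be sketched in a few lines of NumPy; the function name and weight shapes here are hypothetical, and a real implementation would learn `w1` and `w2` during training.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def squeeze_excite(x, w1, w2):
    """Squeeze-and-excitation over a (channels, timesteps) feature map:
    global-average-pool each channel ("squeeze"), pass the pooled vector
    through a two-layer bottleneck ("excite"), and rescale each channel
    by the resulting gate in (0, 1)."""
    s = x.mean(axis=1)                          # squeeze: (channels,)
    e = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))   # excite:  (channels,)
    return x * e[:, None]                       # channel-wise rescaling
```

The gate lets the network emphasize informative channels and suppress uninformative ones before the features reach the LSTM.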

4.
Sci Rep ; 14(1): 20622, 2024 09 04.
Article in English | MEDLINE | ID: mdl-39232053

ABSTRACT

Alzheimer's Disease (AD) causes the slow death of brain cells due to brain-cell shrinkage and is more prevalent in older people. In most cases, the symptoms of AD are mistaken for age-related stress. The most widely utilized method to detect AD is Magnetic Resonance Imaging (MRI). Combined with Artificial Intelligence (AI) techniques, identifying diseases related to the brain has become easier. However, near-identical phenotypes make it challenging to identify the disease from neuro-images. Hence, a deep learning method to detect AD at the beginning stage is suggested in this work. The newly implemented "Enhanced Residual Attention with Bi-directional Long Short-Term Memory (Bi-LSTM) (ERABi-LNet)" is used in the detection phase to identify AD from MRI images. This model enhances Alzheimer's detection performance by 2-5%, minimizes error rates, and improves model balance so that multi-class problems are supported. At first, MRI images are given to the "Residual Attention Network (RAN)", which is specially developed with three convolutional layers, namely atrous, dilated and Depth-Wise Separable (DWS), to obtain the relevant attributes. The most appropriate attributes are determined by these layers and subjected to target-based fusion. The fused attributes are then fed into the "Attention-based Bi-LSTM", from which the final outcome is obtained. A median-based detection efficiency of 26.37% and an accuracy of 97.367% are obtained by tuning the parameters of ERABi-LNet with the help of Modified Search and Rescue Operations (MCDMR-SRO). The obtained results are compared with ROA-ERABi-LNet, EOO-ERABi-LNet, GTBO-ERABi-LNet and SRO-ERABi-LNet. The ERABi-LNet thus provides enhanced accuracy and other performance metrics compared with such deep learning models.
The proposed method also achieves better sensitivity, specificity, F1-Score and False Positive Rate than all of the above-mentioned competing models, with values of 97.49%, 97.84%, 97.74% and 2.616, respectively. This ensures that the model has better learning capabilities and yields fewer false positives with balanced prediction.


Subject(s)
Alzheimer Disease , Magnetic Resonance Imaging , Humans , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/pathology , Magnetic Resonance Imaging/methods , Deep Learning , Memory, Short-Term/physiology , Brain/diagnostic imaging , Brain/pathology , Neural Networks, Computer , Aged
5.
Article in English | MEDLINE | ID: mdl-39235388

ABSTRACT

Machine learning (ML) has been used to predict lower extremity joint torques from joint angles and surface electromyography (sEMG) signals. This study trained three bidirectional Long Short-Term Memory (LSTM) models, which use joint angle, sEMG, and combined modalities as inputs, on a publicly accessible dataset to estimate joint torques during normal walking, and assessed both the performance of the models using each input independently and the accuracy of joint-specific torque prediction. The performance of each model was evaluated using the normalized root mean square error (nRMSE) and the Pearson correlation coefficient (PCC). The median PCC and nRMSE scores of the models were highly convergent, and the bulk of the mean nRMSE values across all joints were less than 10%. The ankle joint torque was the most successfully predicted output, with a mean nRMSE of less than 9% for all models. The knee joint torque prediction reached a mean nRMSE of 11%, and the hip joint torque prediction 10%. The PCC values of each model were high and remarkably comparable for the ankle (∼0.98), knee (∼0.92), and hip (∼0.95) joints. The models obtained closely matching accuracy with single and combined input modalities, indicating that either input alone may be sufficient for predicting the torque of a particular joint, obviating the need for the other in certain contexts.
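
The two evaluation metrics used above can be sketched as follows; the range-based normalization in `nrmse` is one common convention and may differ from the paper's exact choice.

```python
import numpy as np

def nrmse(y_true, y_pred):
    """RMSE normalized by the range of the measured signal, in percent
    (one common convention for normalized RMSE)."""
    rmse = np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))
    return 100.0 * rmse / (np.max(y_true) - np.min(y_true))

def pcc(y_true, y_pred):
    """Pearson correlation coefficient between measured and predicted values."""
    return np.corrcoef(y_true, y_pred)[0, 1]
```

Note that PCC is invariant to linear rescaling of the prediction, which is why it is usually reported alongside an error metric such as nRMSE rather than on its own.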

6.
Heliyon ; 10(17): e36519, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39263075

ABSTRACT

Thermal energy storage (TES) offers a practical solution for reducing industrial operation costs by load-shifting heat demands within industrial processes. In the integrated Thermomechanical pulping process, TES systems within the Energy Hub can provide heat for the paper machine, aiming to minimize electricity costs during peak hours. This strategic use of TES technology ensures more cost-effective and efficient energy consumption management, leading to overall operational savings. This research presents a novel method for optimizing the design and operation of an Energy Hub with TES in the forest industry. The proposed approach for the optimal design involves a comprehensive analysis of the dynamic efficiency, reliability, and availability of system components. The Energy Hub comprises energy conversion technologies such as an electric boiler and a steam generator heat pump. The study examines how the reliability of the industrial Energy Hub system affects operational costs and analyzes the impact of the maximum capacities of its components on system reliability. The method identifies the optimal design point for maximizing system reliability benefits. To optimize the TES system's charging/discharging schedule, an advanced predictive method using time series prediction models, including LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit), has been developed to forecast average daily electricity prices. The results highlight significant benefits from the optimal operation of TES integrated with Energy Hubs, demonstrating a 4.5-6 percent reduction in system operation costs depending on the reference year. Optimizing the Energy Hub design improves system availability, reducing operation costs due to unsupplied demand penalty costs. The system's peak availability can reach 98 %, with a maximum heat pump capacity of 2 MW and an electric boiler capacity of 3.4 MW. 
The GRU method showed superior accuracy in predicting electricity prices compared to LSTM, indicating its potential as a reliable electricity price predictor within the system.
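
Feeding daily electricity prices to an LSTM or GRU requires turning the series into supervised windows; a minimal sketch follows (the function name `make_windows` and the one-step-ahead target are illustrative assumptions, not details from the paper).

```python
import numpy as np

def make_windows(series, lookback):
    """Turn a 1-D price series into (X, y) pairs: each row of X holds
    `lookback` consecutive values and y holds the value that follows."""
    series = np.asarray(series, dtype=float)
    X = np.lib.stride_tricks.sliding_window_view(series, lookback)[:-1]
    y = series[lookback:]
    return X, y
```

The resulting (X, y) pairs can be split chronologically into training and test sets, which avoids leaking future prices into the training data.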

7.
Sci Rep ; 14(1): 21842, 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39294219

ABSTRACT

This study introduces an optimized hybrid deep learning approach that leverages meteorological data to improve short-term wind energy forecasting in desert regions. Over a year, various machine learning and deep learning models were tested across different wind speed categories, with multiple performance metrics used for evaluation. Hyperparameter optimization was performed for the LSTM and Conv-Dual Attention Long Short-Term Memory (Conv-DA-LSTM) architectures. A comparison of the techniques indicates that the deep learning methods consistently outperform the classical techniques, with Conv-DA-LSTM yielding the best overall performance by a clear margin. This method obtained the lowest error (RMSE: 71.866) and the highest accuracy (R2: 0.93). The optimization is especially effective for higher wind speeds, achieving a remarkable improvement of 22.9%. Monthly performance showed at least some consistent enhancement in every month (RRMSE reductions from 1.6 to 10.2%). These findings highlight the potential of advanced deep learning techniques for enhancing wind energy forecasting accuracy, particularly in challenging desert environments. The hybrid method developed in this study presents a promising direction for improving renewable energy management, allowing for more efficient resource allocation and improved wind resource predictability.

8.
J Hazard Mater ; 479: 135709, 2024 Nov 05.
Article in English | MEDLINE | ID: mdl-39236536

ABSTRACT

Ultrafiltration (UF) is widely employed for harmful algae rejection, whereas severe membrane fouling hampers its long-term operation. Herein, calcium peroxide (CaO2) and ferrate (Fe(VI)) were innovatively coupled for low-damage removal of algal contaminants and fouling control in the UF process. As a result, the terminal J/J0 increased from 0.13 to 0.66, with Rr and Rir decreased by 96.74 % and 48.47 %, respectively. The cake layer filtration was significantly postponed, and pore blocking was reduced. The ζ-potential of algal foulants was weakened from -34.4 mV to -18.7 mV, and 86.15 % of algal cells were removed, with flocs of 300 µm generated. Cell integrity was better preserved in comparison to the Fe(VI) treatment, and Fe(IV)/Fe(V) was verified to be the dominant reactive species. The membrane fouling alleviation mechanisms could be attributed to the reduction of the fouling loads and the changes in the interfacial free energies. A membrane fouling prediction model was built based on a long short-term memory deep learning network, which predicted that the filtration volume at J/J0 = 0.2 increased from 288 to 1400 mL. The results provide a new route for controlling algal membrane fouling from the perspective of promoting the generation of Fe(IV)/Fe(V) intermediates.


Subject(s)
Iron , Membranes, Artificial , Peroxides , Iron/chemistry , Peroxides/chemistry , Ultrafiltration/methods , Water Purification/methods , Biofouling/prevention & control
9.
Sensors (Basel) ; 24(17)2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39275455

ABSTRACT

Tissue hysteresivity is an important marker for determining the onset and progression of respiratory diseases, calculated from forced oscillation lung function test data. This study aims to reduce the number and duration of required measurements by combining multivariate data from various sensing devices. We propose using the Forced Oscillation Technique (FOT) lung function test in both a low-frequency prototype and the commercial RESMON device, combined with continuous monitoring from the Equivital (EQV) LifeMonitor and processed by artificial intelligence (AI) algorithms. While AI and deep learning have been employed in various aspects of respiratory system analysis, such as predicting lung tissue displacement and respiratory failure, the prediction or forecasting of tissue hysteresivity remains largely unexplored in the literature. In this work, the Long Short-Term Memory (LSTM) model is used in two ways: (1) to estimate the hysteresivity coefficient η using heart rate (HR) data collected continuously by the EQV sensor, and (2) to forecast η values by first predicting the heart rate from electrocardiogram (ECG) data. Our methodology involves a rigorous two-hour measurement protocol, with synchronized data collection from the EQV, FOT, and RESMON devices. Our results demonstrate that LSTM networks can accurately estimate the tissue hysteresivity parameter η, achieving an R2 of 0.851 and a mean squared error (MSE) of 0.296 for estimation, and forecast η with an R2 of 0.883 and an MSE of 0.528, while significantly reducing the number of required measurements by a factor of three (i.e., from ten to three) for the patient. We conclude that our novel approach minimizes patient effort by reducing the measurement time and the overall ambulatory time and costs while highlighting the potential of artificial intelligence methods in respiratory monitoring.


Subject(s)
Artificial Intelligence , Respiratory Mechanics , Humans , Respiratory Mechanics/physiology , Heart Rate/physiology , Algorithms , Respiratory Function Tests/methods , Respiratory Function Tests/instrumentation , Prognosis , Monitoring, Physiologic/methods , Monitoring, Physiologic/instrumentation , Electrocardiography/methods
10.
Sensors (Basel) ; 24(17)2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39275513

ABSTRACT

In urban road environments, global navigation satellite system (GNSS) signals may be interrupted due to occlusion by buildings and obstacles, resulting in reduced accuracy and discontinuity of combined GNSS/inertial navigation system (INS) positioning. Improving the accuracy and robustness of combined GNSS/INS positioning systems for land vehicles in the presence of GNSS interruptions is a challenging task. The main objective of this paper is to develop a method for predicting GNSS information during GNSS outages based on a long short-term memory (LSTM) neural network to assist in factor graph-based combined GNSS/INS localization, which can provide a reliable combined localization solution during GNSS signal outages. In an environment with good GNSS signals, a factor graph fusion algorithm is used for data fusion of the combined positioning system, and an LSTM neural network prediction model is trained, and model parameters are determined using the INS velocity, inertial measurement unit (IMU) output, and GNSS position incremental data. In an environment with interrupted GNSS signals, the LSTM model is used to predict the GNSS positional increments and generate the pseudo-GNSS information and the solved results of INS for combined localization. In order to verify the performance and effectiveness of the proposed method, we conducted real-world road test experiments on land vehicles installed with GNSS receivers and inertial sensors. The experimental results show that, compared with the traditional combined GNSS/INS factor graph localization method, the proposed method can provide more accurate and robust localization results even in environments with frequent GNSS signal loss.
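
The pseudo-GNSS generation step described above can be sketched as accumulating the predicted position increments from the last valid fix; the function name and array shapes here are hypothetical.

```python
import numpy as np

def pseudo_gnss_positions(last_fix, predicted_increments):
    """During a GNSS outage, accumulate the LSTM-predicted per-epoch
    position increments from the last valid fix to generate pseudo-GNSS
    positions that can be fed to the factor graph."""
    inc = np.asarray(predicted_increments, dtype=float)
    return np.asarray(last_fix, dtype=float) + np.cumsum(inc, axis=0)
```

Because errors in the predicted increments accumulate over the outage, the pseudo-measurements would normally be given a growing uncertainty in the factor graph.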

11.
Sensors (Basel) ; 24(17)2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39275539

ABSTRACT

Detection of abnormal situations in mobile systems not only provides predictions about risky situations but also has the potential to increase energy efficiency. In this study, unsupervised hybrid anomaly detection approaches were developed using two real-world drives of a battery electric vehicle. The anomaly detection performances of hybrid models created from combinations of a Long Short-Term Memory (LSTM)-Autoencoder, the Local Outlier Factor (LOF), and the Mahalanobis distance were evaluated with the silhouette score, Davies-Bouldin index, and Calinski-Harabasz index, and the potential energy recovery rates were also determined. The two driving datasets were evaluated for chaotic behaviour using the Lyapunov exponent, Kolmogorov-Sinai entropy, and fractal dimension metrics. The developed hybrid models are superior to the individual sub-methods in anomaly detection; Hybrid Model-2 was 2.92% more successful in anomaly detection than Hybrid Model-1. In terms of potential energy saving, Hybrid Model-1 provided a 31.26% advantage and Hybrid Model-2 a 31.48% advantage. A close relationship between anomaly and chaoticity was also observed. In contrast to the anomaly detection literature, which is dominated by cyber security and visual data sources, a strategy was developed that provides energy-efficiency-based anomaly detection and chaotic analysis from data obtained without additional sensors.
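
Of the three sub-methods combined above, the Mahalanobis distance is the simplest to sketch; a minimal NumPy version follows (the function name is illustrative, and a real pipeline would fit the mean and covariance on reference data before scoring new drives).

```python
import numpy as np

def mahalanobis_scores(X):
    """Squared Mahalanobis distance of each sample to the data mean,
    using the sample covariance; large values flag candidate anomalies."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)
```

Unlike the Euclidean distance, this score accounts for the correlation between signals, so a sample can be flagged for an unusual combination of otherwise normal-looking values.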

12.
Sensors (Basel) ; 24(17)2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39275542

ABSTRACT

Surface electromyography (sEMG) offers a novel method for human-machine interactions (HMIs), since it is a distinct physiological electrical signal that carries human movement intention and muscle information. Unfortunately, the nonlinear and non-smooth features of sEMG signals often make joint angle estimation difficult. This paper proposes a joint angle prediction model for the continuous estimation of wrist motion angle changes based on sEMG signals. The proposed model combines a temporal convolutional network (TCN) with a long short-term memory (LSTM) network: the TCN can sense local information and mine the deeper information in the sEMG signals, while the LSTM, with its excellent temporal memory capability, compensates for the TCN's limited ability to capture the long-term dependencies of the sEMG signals, resulting in better predictions. We validated the proposed method on the publicly available Ninapro DB1 dataset, selecting the first eight subjects and three types of wrist-dependent movements: wrist flexion (WF), wrist ulnar deviation (WUD), and wrist extension and closed hand (WECH). Finally, the proposed TCN-LSTM model was compared with the TCN and LSTM models, and it outperformed both in terms of the root mean square error (RMSE) and average coefficient of determination (R2). The TCN-LSTM model achieved an average RMSE of 0.064, representing a 41% reduction compared to the TCN model and a 52% reduction compared to the LSTM model. The TCN-LSTM also achieved an average R2 of 0.93, indicating an 11% improvement over the TCN model and an 18% improvement over the LSTM model.
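
The core building block of a TCN is the causal (optionally dilated) 1-D convolution; a minimal single-channel sketch follows, without the residual connections and weight normalization a full TCN would use.

```python
import numpy as np

def causal_conv1d(x, kernel, dilation=1):
    """Causal dilated 1-D convolution: output[t] depends only on
    x[t], x[t-d], x[t-2d], ... (kernel[0] weights the newest sample),
    so no future samples leak into the prediction."""
    x = np.asarray(x, dtype=float)
    k = np.asarray(kernel, dtype=float)
    pad = (len(k) - 1) * dilation          # left-pad with zeros
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(k[j] * xp[pad + t - j * dilation] for j in range(len(k)))
        for t in range(len(x))
    ])
```

Stacking such layers with exponentially growing dilations (1, 2, 4, ...) is what gives a TCN its large receptive field over the sEMG history.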


Subject(s)
Electromyography , Neural Networks, Computer , Wrist Joint , Humans , Electromyography/methods , Wrist Joint/physiology , Range of Motion, Articular/physiology , Movement/physiology , Signal Processing, Computer-Assisted , Algorithms , Adult , Male , Wrist/physiology
13.
Article in English | MEDLINE | ID: mdl-39290085

ABSTRACT

Autism Spectrum Disorder (ASD) is a type of brain developmental disability that cannot be completely treated, but its impact can be reduced through early interventions. Early identification of neurological disorders will better assist in preserving the subjects' physical and mental health. Although numerous research works exist for detecting autism spectrum disorder, they are cumbersome and insufficient for dealing with real-time datasets. Therefore, to address these issues, this paper proposes an ASD detection mechanism using a novel Hybrid Convolutional Bidirectional Long Short-Term Memory based Water Optimization Algorithm (HCBiLSTM-WOA). The prediction efficiency of the proposed HCBiLSTM-WOA method is investigated using real-time ASD datasets containing both ASD and non-ASD data from toddlers, children, adolescents, and adults. The inconsistent and incomplete representations of the raw ASD dataset are modified using preprocessing procedures such as handling missing values, predicting outliers, data discretization, and data reduction. The preprocessed data are then fed into the proposed HCBiLSTM-WOA classification model to effectively predict the non-ASD and ASD classes. The randomly initialized hyperparameters of the HCBiLSTM model are adjusted and tuned using the water optimization algorithm (WOA) to increase the prediction accuracy of ASD. After detecting the non-ASD and ASD classes, the HCBiLSTM-WOA method further classifies the ASD cases into their respective stages based on the autistic traits observed in toddlers, children, adolescents, and adults. The ethical considerations involved in communicating ASD risk are also complex, owing to data privacy concerns and the unpredictability surrounding ASD risk factors. The fusion of sophisticated deep learning techniques with an optimization algorithm presents a promising framework for ASD diagnosis.
This innovative approach shows potential in effectively managing intricate ASD data, enhancing diagnostic precision, and improving result interpretation. Consequently, it offers clinicians a tool for early and precise detection, allowing for timely intervention in ASD cases. Moreover, the performance of the proposed HCBiLSTM-WOA method is evaluated using various performance indicators such as accuracy, kappa statistics, sensitivity, specificity, log loss, and Area Under the Receiver Operating Characteristics (AUROC). The simulation results reveal the superiority of the proposed HCBiLSTM-WOA method in detecting ASD compared to other existing methods. The proposed method achieves a higher ASD prediction accuracy of about 98.53% than the other methods being compared.

14.
Neural Netw ; 180: 106738, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39305782

ABSTRACT

The world today has embraced prescriptive analytics, which uses data-driven insights to guide future actions. The distribution of data, however, differs depending on the scenario, making it difficult to interpret and comprehend the data efficiently. Different neural network models, inspired by the complex network architecture of the human brain, are used to address this. The activation function is crucial in introducing non-linearity so that data gradients are processed effectively. Although popular activation functions such as ReLU, Sigmoid, Swish, and Tanh have their advantages and disadvantages, they may struggle to adapt to diverse data characteristics. A generalized activation function named the Generalized Exponential Parametric Activation Function (GEPAF) is proposed to address this issue. This function has three parameters, all appearing in the exponent: α, a differencing factor similar to the mean; σ, a variance-like factor that controls the distribution spread; and p, a power factor that improves flexibility. When p = 2, the activation function resembles a Gaussian function. The paper first describes the mathematical derivation of the function and validates its properties mathematically and graphically. The GEPAF function is then implemented in real-world supply chain datasets. One dataset features a small sample size with high variance, while the other shows significant variance with a moderate amount of data. An LSTM network processes the datasets for sales and profit prediction. In a comparative analysis of activation functions, the suggested function performs better than the popular ones, showing at least a 30% improvement in regression evaluation metrics and better loss decay characteristics.
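
Based on the description above, GEPAF can be sketched with the assumed form exp(-|(x - α)/σ|^p): centred by α, spread by σ, shaped by p, and Gaussian-like at p = 2. The exact definition in the paper may differ in detail from this sketch.

```python
import numpy as np

def gepaf(x, alpha=0.0, sigma=1.0, p=2.0):
    """Generalized exponential parametric activation (assumed form):
    exp(-|(x - alpha)/sigma|**p). alpha shifts the centre, sigma
    controls the spread, and p controls the shape (p=2 -> Gaussian)."""
    return np.exp(-np.abs((x - alpha) / sigma) ** p)
```

Making α, σ, and p trainable would let each layer adapt the activation's centre, width, and tail behaviour to the data distribution, which is the flexibility the abstract emphasizes.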

15.
Heliyon ; 10(17): e36714, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39296184

ABSTRACT

The precise assessment of shallow foundation settlement on cohesionless soils is a challenging geotechnical issue, primarily due to the significant uncertainties related to the factors influencing the settlement. This study aims to create an advanced hybrid machine learning methodology for accurately estimating shallow foundations' settlement (Sm). The initial contribution of the current research is developing and validating a robust hybrid optimization methodology based on an artificial electric field and single candidate optimizer (AEFSCO). This approach is thoroughly tested using various benchmark functions. AEFSCO is also used to optimize three machine learning methods, long short-term memory (LSTM), support vector regression (SVR), and multilayer perceptron neural network (MLPNN), by adjusting their hyperparameters for predicting the settlement of shallow foundations. A database consisting of 189 individual case histories from various investigations was used for training and testing the models. The database includes five input parameters and one output, covering both the geometric characteristics of the foundation and the properties of the sandy soil. The results demonstrate that employing effective optimization strategies to adjust the ML models' hyperparameters can significantly improve the accuracy of the predicted results. AEFSCO increased the coefficient of determination (R2) of the MLPNN model by 9.3 %, the SVR model by 8 %, and the LSTM model by 22 %. The LSTM-AEFSCO model is also more accurate than the SVR-AEFSCO and MLPNN-AEFSCO models: it achieved an R2 of 0.9903, compared with 0.9494 and 0.9290 for the other two, increases of approximately 4.5 % and 6 %, respectively.

16.
PeerJ Comput Sci ; 10: e2201, 2024.
Article in English | MEDLINE | ID: mdl-39314710

ABSTRACT

Multivariate time series anomaly detection has garnered significant attention in fields such as IT operations, finance, medicine, and industry. However, a key challenge lies in the fact that anomaly patterns often exhibit multi-scale temporal variations, which existing detection models often fail to capture effectively. This limitation significantly impacts detection accuracy. To address this issue, we propose the MFAM-AD model, which combines the strengths of convolutional neural networks (CNNs) and bi-directional long short-term memory (Bi-LSTM). The MFAM-AD model is designed to enhance anomaly detection accuracy by seamlessly integrating temporal dependencies and multi-scale spatial features. Specifically, it utilizes parallel convolutional layers to extract features across different scales, employing an attention mechanism for optimal feature fusion. Additionally, Bi-LSTM is leveraged to capture time-dependent information, reconstruct the time series, and enable accurate anomaly detection based on reconstruction errors. In contrast to existing algorithms that struggle with inadequate feature fusion or are confined to single-scale feature analysis, MFAM-AD effectively addresses the unique challenges of multivariate time series anomaly detection. Experimental results on five publicly available datasets demonstrate the superiority of the proposed model. Specifically, on the SMAP, MSL, and SMD1-1 datasets, our MFAM-AD model has the second-highest F1 score after the current state-of-the-art DCdetector model. On the NIPS-TS-SWAN and NIPS-TS-GECCO datasets, the F1 scores of MFAM-AD are 0.046 (6.2%) and 0.09 (21.3%) higher than those of DCdetector, respectively (F1 values range from 0 to 1). These findings validate the MFAM-AD model's efficacy in multivariate time series anomaly detection, highlighting its potential in various real-world applications.
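
The final step above, anomaly detection based on reconstruction errors, typically reduces to thresholding; a minimal sketch follows (the quantile-based threshold is one common choice, not necessarily the paper's).

```python
import numpy as np

def detect_anomalies(errors, quantile=0.99):
    """Flag timesteps whose reconstruction error exceeds a quantile
    threshold estimated from the error distribution itself."""
    errors = np.asarray(errors, dtype=float)
    threshold = np.quantile(errors, quantile)
    return errors > threshold, threshold
```

In practice the threshold would be calibrated on held-out normal data (or tuned for the best F1 score), rather than estimated from the same series being scored.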

17.
Front Med (Lausanne) ; 11: 1427239, 2024.
Article in English | MEDLINE | ID: mdl-39290396

ABSTRACT

The global impact of the ongoing COVID-19 pandemic, while somewhat contained, remains a critical challenge that has tested the resilience of humanity. Accurate and timely prediction of COVID-19 transmission dynamics and future trends is essential for informed decision-making in public health. Deep learning and mathematical models have emerged as promising tools, yet concerns regarding accuracy persist. This research suggests a novel model for forecasting COVID-19's future trajectory that combines the benefits of machine learning and mathematical models. The SIRVD model, a mathematical model that depicts the spread of the infection through a population, serves as the basis for the proposed model. A deep prediction model for COVID-19 using XGBoost-SIRVD-LSTM is presented. The suggested approach combines a Susceptible-Infected-Recovered-Vaccinated-Deceased (SIRVD) model with a deep learning model that includes Long Short-Term Memory (LSTM) and other prediction models, with feature selection performed using the XGBoost method. The model keeps track of changes in each group's membership over time. To increase the SIRVD model's accuracy, machine learning is applied: feature selection identifies the key properties for forecasting the spread of the infection, and a deep learning model then learns from these features to create predictions. The performance of the proposed model was assessed with prediction metrics such as R2, root mean square error (RMSE), mean absolute percentage error (MAPE), and normalized root mean square error (NRMSE). The results were also validated against those of other prediction models. The empirical results show that the suggested model outperforms similar models. These findings suggest its potential as a valuable tool for pandemic management and public health decision-making.
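
The SIRVD compartments described above can be stepped forward in discrete time as sketched below; the rate parameters and the exact update equations are illustrative assumptions, not the paper's formulation.

```python
def sirvd_step(state, beta, gamma, nu, mu, N):
    """One discrete-time step of a Susceptible-Infected-Recovered-
    Vaccinated-Deceased model. beta: transmission rate, gamma: recovery
    rate, nu: vaccination rate, mu: death rate, N: total population."""
    S, I, R, V, D = state
    new_inf = beta * S * I / N   # new infections this step
    new_vac = nu * S             # newly vaccinated susceptibles
    new_rec = gamma * I          # recoveries
    new_dead = mu * I            # deaths
    return (S - new_inf - new_vac,
            I + new_inf - new_rec - new_dead,
            R + new_rec,
            V + new_vac,
            D + new_dead)
```

Because every outflow from one compartment is an inflow to another, the total population S + I + R + V + D is conserved at every step, which is a useful sanity check on any implementation.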

18.
Environ Res ; 262(Pt 2): 119911, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39233036

ABSTRACT

Establishing a highly reliable and accurate water quality prediction model is critical for effective water environment management. However, enhancing the performance of these predictive models continues to pose challenges, especially in plain watersheds with complex hydraulic conditions. This study aims to evaluate the efficacy of three traditional machine learning models versus three deep learning models in predicting the water quality of plain river networks and to develop a novel hybrid deep learning model to further improve prediction accuracy. The performance of the proposed model was assessed under various input feature sets and data temporal frequencies. The findings indicated that deep learning models outperformed traditional machine learning models in handling complex time series data. Long Short-Term Memory (LSTM) models improved the R² by approximately 29% and lowered the Root Mean Square Error (RMSE) by about 48.6% on average. The hybrid Bayes-LSTM-GRU (Gated Recurrent Unit) model significantly enhanced prediction accuracy, reducing the average RMSE by 18.1% compared to the single LSTM model. Models trained on feature-selected datasets exhibited superior performance compared to those trained on original datasets. Higher temporal frequencies of input data generally provide more useful information. However, in datasets with numerous abrupt changes, increasing the temporal interval proves beneficial. Overall, the proposed hybrid deep learning model demonstrates an efficient and cost-effective method for improving water quality prediction performance, showing significant potential for application in managing water quality in plain watersheds.
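The R² and RMSE figures quoted above are the standard regression metrics; for reference, they can be computed as follows (a generic sketch with made-up example values, unrelated to the study's data):

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root Mean Square Error: penalizes large deviations quadratically.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    # Coefficient of determination: 1 minus the ratio of residual
    # variance to the variance of the observations.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

y = np.array([1.0, 2.0, 3.0, 4.0])       # observed water quality values
yhat = np.array([1.1, 1.9, 3.2, 3.8])    # model predictions
print(round(rmse(y, yhat), 4), round(r2(y, yhat), 4))  # 0.1581 0.98
```

A relative RMSE reduction such as the 18.1% reported for the hybrid model is then simply `(rmse_lstm - rmse_hybrid) / rmse_lstm`.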

19.
Article in English | MEDLINE | ID: mdl-39220673

ABSTRACT

Glaucoma is a major cause of blindness and vision impairment worldwide, and visual field (VF) tests are essential for monitoring conversion to glaucoma. While previous studies have primarily focused on using VF data at a single time point for glaucoma prediction, there has been limited exploration of longitudinal trajectories. Additionally, many deep learning techniques treat time-to-glaucoma prediction as a binary classification problem (glaucoma Yes/No), resulting in the misclassification of some censored subjects into the nonglaucoma category and decreased statistical power. To tackle these challenges, we propose and implement several deep-learning approaches that naturally incorporate temporal and spatial information from longitudinal VF data to predict time-to-glaucoma. When evaluated on the Ocular Hypertension Treatment Study (OHTS) dataset, our proposed convolutional neural network (CNN)-long short-term memory (LSTM) model emerged as the top performer among all those examined. The implementation code can be found online (https://github.com/rivenzhou/VF_prediction).
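The censoring problem the abstract describes, forcing censored subjects into a "no glaucoma" class, is commonly avoided with discrete-time survival labels, where intervals after a subject's censoring time are simply masked out of the loss. The following sketch illustrates that labeling scheme only; the function name and interval encoding are invented for illustration and are not taken from the paper's repository.

```python
import numpy as np

def discrete_time_labels(n_intervals, event=None, censored_at=None):
    # Build per-interval targets for discrete-time survival analysis.
    # labels[t] = 1 only in the interval where conversion occurs;
    # mask[t] = 1 while the subject is still under observation, so
    # censored subjects contribute no "nonglaucoma" label after dropout.
    labels = np.zeros(n_intervals)
    mask = np.zeros(n_intervals)
    if event is not None:            # conversion observed in interval `event`
        mask[:event + 1] = 1
        labels[event] = 1
    else:                            # censored: only at-risk intervals count
        mask[:censored_at + 1] = 1
    return labels, mask

# Subject converts to glaucoma in the third follow-up interval (index 2):
lab, m = discrete_time_labels(5, event=2)
print(lab.tolist(), m.tolist())  # [0,0,1,0,0] and [1,1,1,0,0]

# Subject censored after interval 3: no positive label is ever forced.
lab_c, m_c = discrete_time_labels(5, censored_at=3)
```

A model trained with such masked labels retains the information in censored follow-up, which is the source of the statistical power that a binary Yes/No formulation gives up.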

20.
Ecotoxicol Environ Saf ; 283: 116856, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39151373

ABSTRACT

Air pollution in industrial environments, particularly in the chrome plating process, poses significant health risks to workers due to high concentrations of hazardous pollutants. Exposure to substances like hexavalent chromium, volatile organic compounds (VOCs), and particulate matter can lead to severe health issues, including respiratory problems and lung cancer. Continuous monitoring and timely intervention are crucial to mitigate these risks. Traditional air quality monitoring methods often lack real-time data analysis and predictive capabilities, limiting their effectiveness in addressing pollution hazards proactively. This paper introduces a real-time air pollution monitoring and forecasting system specifically designed for the chrome plating industry. The system, supported by Internet of Things (IoT) sensors and AI approaches, detects a wide range of air pollutants, including NH3, CO, NO2, CH4, CO2, SO2, O3, PM2.5, and PM10, and provides real-time data on pollutant concentration levels. Data collected by the sensors are processed using LSTM, Random Forest, and Linear Regression models to predict pollution levels. The LSTM model achieved a coefficient of determination (R²) of 99 % and a mean absolute error (MAE) of 0.33 for temperature and humidity forecasting. For PM2.5, the Random Forest model outperformed others, achieving an R² of 84 % and an MAE of 10.11. The system activates factory exhaust fans to circulate air when high pollution levels are predicted for the coming hours, allowing for proactive measures to improve air quality before issues arise. This innovative approach demonstrates significant advancements in industrial environmental monitoring, enabling dynamic responses to pollution and improving air quality in industrial settings.
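The forecast-triggered fan control described above reduces to a simple rule: if any predicted concentration over the coming hours exceeds a safety threshold, switch ventilation on pre-emptively. The sketch below illustrates that control logic only; the function name and the 35 µg/m³ PM2.5 threshold (a common 24-hour guideline figure) are assumptions, not values from the paper.

```python
def fan_command(pm25_forecast, threshold=35.0):
    # Turn exhaust fans ON if any forecast value for the coming hours
    # exceeds the threshold (PM2.5 in µg/m³), so ventilation starts
    # before concentrations actually become hazardous.
    return "ON" if max(pm25_forecast) > threshold else "OFF"

print(fan_command([12.0, 18.5, 41.2]))  # ON  (third hour breaches limit)
print(fan_command([12.0, 18.5, 22.0]))  # OFF (all hours within limit)
```

In the deployed system the forecast values would come from the LSTM or Random Forest models fed by the IoT sensor stream, and the command would drive an actuator rather than print.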


Subject(s)
Air Pollutants, Air Pollution, Environmental Monitoring, Forecasting, Particulate Matter, Environmental Monitoring/methods, Air Pollution/statistics & numerical data, Air Pollution/analysis, Air Pollutants/analysis, Particulate Matter/analysis, Internet of Things, Artificial Intelligence, Volatile Organic Compounds/analysis, Industries