Results 1 - 20 of 43
1.
Sci Total Environ ; 952: 175999, 2024 Nov 20.
Article in English | MEDLINE | ID: mdl-39233078

ABSTRACT

Large lakes play an important role in water resource supply, regional climate regulation, and ecosystem support, but they face threats from frequent extreme drought events, necessitating an understanding of the mechanisms behind these events. In this study, we developed an explainable machine learning (ML) model that combines the Bayesian optimized (BO) long short-term memory (LSTM) model and the integrated gradients (IG) interpretation method to simulate and explain lake water level variations. In addition, the hydrological drought trends and extreme drought events in Poyang Lake from 1960 to 2022 were identified using the standardized water level index (SWI) and run theory. The analysis revealed that the frequency of hydrological droughts in Poyang Lake increased from 1960 to 2022, especially in the autumn after 2003. By selecting the flows of the catchment and the Yangtze River as the input features, the BO-LSTM model accurately predicted the water level of Poyang Lake. The IG method was then used to interpret the prediction results from three aspects: the importance ranking of the input features, their roles in the seasonal drought trends, and their roles in extreme drought events. The results indicate that (1) the most influential factor affecting the water level of Poyang Lake was the inflow of the Ganjiang River in the catchment. (2) The increase in the lake outflow caused by the Yangtze River's draining effect was the reason for the intensification of the autumn drought in Poyang Lake. (3) The extreme hydrological drought events were primarily caused by low catchment inflows. Overall, this research provides a new approach that balances prediction accuracy with interpretability for predicting and understanding the hydrological processes in large river-connected lakes. Moreover, this method was also applied to the attribution analysis of hydrological drought in Poyang Lake, providing theoretical support for regional water resource management.
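The run-theory step above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the real SWI fits a probability distribution to water levels, whereas the plain z-score standardization and the -0.5 threshold used here are simplifying assumptions.

```python
# Sketch of hydrological drought identification: standardize the water-level
# series, then apply run theory to find maximal runs below a threshold.

def standardize(levels):
    """Return z-scores of a water-level series (stand-in for the SWI)."""
    n = len(levels)
    mean = sum(levels) / n
    std = (sum((x - mean) ** 2 for x in levels) / n) ** 0.5
    return [(x - mean) / std for x in levels]

def drought_runs(swi, threshold=-0.5):
    """Run theory: a drought event is a maximal run of SWI below threshold.
    Returns (start, end, severity) tuples; severity is the summed deficit."""
    events, start = [], None
    for i, v in enumerate(swi):
        if v < threshold and start is None:
            start = i
        elif v >= threshold and start is not None:
            deficit = sum(threshold - swi[j] for j in range(start, i))
            events.append((start, i - 1, deficit))
            start = None
    if start is not None:  # drought still ongoing at end of record
        deficit = sum(threshold - swi[j] for j in range(start, len(swi)))
        events.append((start, len(swi) - 1, deficit))
    return events

# Illustrative monthly water levels (m) with two low-water episodes.
levels = [12.0, 11.5, 10.0, 9.0, 9.5, 12.5, 13.0, 12.8, 9.2, 9.0, 9.1, 12.0]
events = drought_runs(standardize(levels))
```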

2.
PeerJ Comput Sci ; 10: e2164, 2024.
Article in English | MEDLINE | ID: mdl-39145256

ABSTRACT

Arbitrage trading is a common quantitative trading strategy that leverages long-term cointegration relationships between related assets to conduct spread trading for profit. Specifically, when the cointegration relationship between two or more related series holds, it exploits the stability and mean-reverting character of that relationship for spread trading. However, in real quantitative trading, the Engle-Granger two-step method imposes stringent conditions for cointegration to hold, which can easily be violated by price fluctuations or trend characteristics in the linear combination, leading to the failure of the arbitrage strategy and significant losses. To address this issue, this article proposes an optimized strategy based on long short-term memory (LSTM), termed Dynamic-LSTM Arb (DLA), which classifies the trend movements of linear combinations of multiple assets. It assists the Engle-Granger two-step method in determining cointegration relationships when clear upward or downward non-stationary trends emerge, avoiding the frequent strategy switches and invalidated arbitrage positions that such trends otherwise cause. Additionally, for mean-reversion arbitrage trading, we designed an algorithm that dynamically updates the optimal trading boundaries. Training results indicate that the proposed model successfully filters out unprofitable trades. In tests on a backtesting platform, a theoretical return of 23% was achieved over a 10-day futures trading period at the 1-minute level, significantly outperforming the benchmark strategy and the return of the CSI 300 Index over the same period.
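Step one of the Engle-Granger procedure (an OLS hedge ratio) and a z-score trading band can be sketched as follows; the stationarity test (e.g. ADF) that completes the two-step method is omitted, and all prices are illustrative.

```python
def hedge_ratio(y, x):
    """OLS slope of y on x (Engle-Granger step 1), in closed form."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

def zscore_signals(y, x, entry=1.0):
    """Spread = y - beta*x; signal +1 = long the spread, -1 = short it,
    0 = stay out. `entry` is the z-score trading boundary."""
    beta = hedge_ratio(y, x)
    spread = [yi - beta * xi for yi, xi in zip(y, x)]
    n = len(spread)
    m = sum(spread) / n
    s = (sum((v - m) ** 2 for v in spread) / n) ** 0.5
    return [1 if (v - m) / s < -entry else -1 if (v - m) / s > entry else 0
            for v in spread]

# Two illustrative cointegrated price series.
x = [10.0, 10.2, 10.1, 10.4, 10.3, 10.5]
y = [20.1, 20.5, 20.0, 21.3, 20.7, 21.0]
sig = zscore_signals(y, x)
```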

3.
Heliyon ; 10(10): e31408, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38826753

ABSTRACT

Nowadays, a wide variety of product labels is available, and consumption is increasingly tailored to individual needs, so many businesses are starting to focus on improving the functionality of modern packaging. Sensory paradigms and emotional reactions can change over the user-product interaction lifecycle. Conventional product package design rests on the designer's emotional imagination and past experience, which is limited by unmanageable content and an absence of professional guidance. Most previous research on emotional image analysis aimed to forecast the most common viewer emotions; since the feelings a picture evokes are highly individual and vary from viewer to viewer, this overall sentiment is not always sufficient for practical use. This research presents an approach to packaging design evaluation based on image emotion perception computing (PDE-IEPC), which combines emotion perception technology with a deep LSTM (long short-term memory) model, resulting in an immersive and dynamic experience for the human senses. The Dynamic Multi-task Hypergraph Learning (DMHL) approach for emotion perception computing considers graphical data, social context, spatial evolution, and location, among other criteria, to evaluate packaging designs efficiently by their emotional impact. Image-Emotion-Social-Net, a large dataset for evaluating multidimensional and categorical attitude representation, is sourced from Flickr and contains over 1 million images contributed by over 9,000 users. Research on this dataset shows that the suggested strategy outperforms many modern techniques for personalized emotion categorization. The experimental results show that the proposed method achieves a packaging design quality rate of 94.1%, a performance success rate of 97.5%, and a mean square error rate of 2%, compared with other existing methods.

4.
PeerJ Comput Sci ; 10: e2046, 2024.
Article in English | MEDLINE | ID: mdl-38855247

ABSTRACT

The COVID-19 pandemic has far-reaching impacts on the global economy and public health. To prevent the recurrence of pandemic outbreaks, the development of short-term prediction models is of paramount importance. We propose an ARIMA-LSTM (autoregressive integrated moving average and long short-term memory) model for predicting future cases and utilize multi-source data to enhance prediction performance. Firstly, we employ the ARIMA-LSTM model to forecast the developmental trends of multi-source data separately. Subsequently, we introduce a Bayes-Attention mechanism to integrate the prediction outcomes from auxiliary data sources into the case data. Finally, experiments are conducted based on real datasets. The results demonstrate a close correlation between predicted and actual case numbers, with superior prediction performance of this model compared to baseline and other state-of-the-art methods.
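The hybrid idea — a linear model for the linear part plus a second learner for the residuals — can be sketched with simple stand-ins: least-squares AR(1) in place of ARIMA and a nearest-neighbor residual corrector in place of the LSTM. This is an assumption-laden illustration of the decomposition, not the paper's model.

```python
def fit_ar1(series):
    """Least-squares AR(1): x_t ≈ a + b * x_{t-1}."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def hybrid_forecast(series, k=2):
    """One-step forecast: AR(1) prediction plus the mean residual at the k
    historical points whose level is closest to the current level."""
    a, b = fit_ar1(series)
    resid = [y - (a + b * x) for x, y in zip(series[:-1], series[1:])]
    last = series[-1]
    order = sorted(range(len(resid)), key=lambda i: abs(series[i] - last))
    correction = sum(resid[i] for i in order[:k]) / k
    return a + b * last + correction

# Illustrative case-count-like series (arbitrary units).
series = [1.0, 1.2, 1.1, 1.4, 1.3, 1.6, 1.5, 1.8]
pred = hybrid_forecast(series)
```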

5.
Huan Jing Ke Xue ; 45(6): 3205-3213, 2024 Jun 08.
Article in Chinese | MEDLINE | ID: mdl-38897744

ABSTRACT

To improve the accuracy and stability of water quality prediction in the Pearl River Estuary, a water quality prediction model based on BiLSTM improved with an attention mechanism was proposed. A feature attention mechanism was introduced to enhance the model's ability to capture important features, and a temporal attention mechanism was added to improve the mining of time series correlation information and water quality fluctuation details. The new model was applied to water quality prediction at eight estuaries of the Pearl River, and prediction performance, generalization ability, and characteristic parameter expansion tests were carried out. The results showed that: (1) The new model achieved high prediction accuracy for the Zhuhaidaqiao section. The root-mean-square error (RMSE) between predicted and measured values was 0.0041 mg·L-1, and the coefficient of determination (R2) was 98.3%. Compared with Multi-BiLSTM, Multi-LSTM, BiLSTM, and LSTM, the new model had the highest prediction accuracy, which verified its accuracy. (2) Both the number of training samples and the number of forecasting steps affected the prediction accuracy, which increased with the number of training samples. When predicting total phosphorus for the Zhuhaidaqiao section, more than 240 training samples yielded higher prediction accuracy. Increasing the number of prediction steps caused the prediction accuracy to decline rapidly, and the reliability of the predictions could not be guaranteed when the number of prediction steps exceeded 5. (3) When the new model was applied to predicting different water quality indexes at the eight estuaries of the Pearl River, the predictions had high precision and the model showed strong generalization ability.
Adding input data on upstream water quality, rainfall, and other characteristic parameters associated with the predicted index of the target section could further improve the prediction accuracy. Extensive tests showed that the new model meets the precision, applicability, and extensibility requirements of water quality prediction in the Pearl River Estuary and thus offers a new approach to high-precision water quality prediction in complex hydrodynamic environments.
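The feature-attention step can be sketched as a softmax reweighting of the input features; the relevance scores below are fixed by hand, whereas in the paper they would be learned jointly with the BiLSTM.

```python
import math

def softmax(scores):
    """Numerically stable softmax."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def apply_feature_attention(features, scores):
    """Reweight each feature by its attention weight (weights sum to 1)."""
    weights = softmax(scores)
    return [w * f for w, f in zip(weights, features)], weights

# One time step with four water-quality features (illustrative values).
features = [0.8, 0.1, 0.5, 0.3]
scores = [2.0, 0.1, 1.0, 0.1]   # assumed "learned" relevance scores
weighted, weights = apply_feature_attention(features, scores)
```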

6.
Heliyon ; 10(4): e26158, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38440291

ABSTRACT

The development of predictive models for infectious diseases, specifically COVID-19, is an important step in early control efforts to reduce the mortality rate. However, traditional time series prediction models used to analyze disease spread trends often face accuracy challenges, necessitating prediction models with enhanced accuracy. This research therefore developed a prediction model based on Long Short-Term Memory (LSTM) networks to better predict the number of confirmed COVID-19 cases. The proposed optimized LSTM (popLSTM) model was compared with the Basic LSTM and an improved MinMaxScaler developed in earlier work, using a COVID-19 dataset taken from previous research. The dataset covers four countries with a high daily increase in confirmed cases: Hong Kong, South Korea, Italy, and Indonesia. The results showed significantly improved accuracy in the optimized model compared with the previous methods. The contributions of popLSTM include: 1) incorporating the output results at the output gate to filter more detailed information than the previous model, and 2) reducing the error value by considering the hidden state at the output gate to improve accuracy. In this experiment, popLSTM exhibited a significant 4% increase in accuracy.
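Plain min-max normalization and its inverse — the baseline that an "improved MinMaxScaler" would build on (the abstract does not specify the improvement) — look like this:

```python
def minmax_fit(series):
    """Record the range of the training series."""
    return min(series), max(series)

def minmax_transform(series, lo, hi):
    """Scale values to [0, 1]; LSTM inputs are commonly normalized this way."""
    span = hi - lo
    return [(x - lo) / span for x in series]

def minmax_inverse(scaled, lo, hi):
    """Map scaled values back to the original units."""
    return [s * (hi - lo) + lo for s in scaled]

# Illustrative daily case counts.
cases = [120.0, 340.0, 560.0, 230.0, 890.0]
lo, hi = minmax_fit(cases)
scaled = minmax_transform(cases, lo, hi)
restored = minmax_inverse(scaled, lo, hi)
```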

7.
Heliyon ; 10(1): e23803, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38261933

ABSTRACT

Green finance plays a pivotal role in guiding and incentivizing private capital to invest in low-carbon industries and initiatives. This study uses data on Chinese cities from 2006 to 2022 to investigate the influence of green finance on carbon emission efficiency. The results show that green finance contributes significantly to enhancing carbon emission efficiency. The impact of green finance on carbon emission efficiency is subject to a dual threshold effect, which depends on the level of regional economic development. Regional innovation emerges as a vital channel through which green finance influences carbon emission efficiency. Moreover, the sensitivity of carbon emission efficiency to the green finance index follows an inverted U-shaped trend. Among the sub-dimensions of green finance, green support shows the most significant effect. These findings provide theoretical support for the role of green finance in fostering carbon efficiency improvement and offer essential insights for formulating effective policy strategies.

8.
Big Data ; 12(1): 49-62, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37976104

ABSTRACT

Market uncertainty greatly interferes with the decisions and plans of market participants, increasing decision-making risk and compromising the interests of decision-makers. Cotton price index (hereinafter, cotton price) volatility is highly noisy, nonlinear, and stochastic, and is susceptible to supply and demand, climate, substitutes, and policy factors, all subject to large uncertainties. To reduce decision risk and provide decision support for policymakers, this article integrates 13 factors affecting cotton price index volatility identified in existing research and divides them into transaction data and interaction data. A long short-term memory (LSTM) model is constructed, and a comparison experiment is conducted to analyze cotton price index volatility. To make the model explainable, we use explainable artificial intelligence (XAI) techniques to perform statistical analysis of the input features. The experimental results show that the LSTM model can accurately analyze the cotton price index fluctuation trend but cannot accurately predict the actual price of cotton, and that transaction data plus interaction data are more sensitive than transaction data alone for analyzing the fluctuation trend, having a positive effect on the analysis. This study can accurately reflect the fluctuation trend of the cotton market; provide a reference for the state, enterprises, and cotton farmers in decision-making; and reduce the risk caused by frequent fluctuations in cotton prices. The XAI-based analysis of the model builds decision-makers' confidence in it.
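The abstract does not name the XAI technique used; one simple model-agnostic option is permutation importance, which shuffles one input feature and measures the resulting error increase. A sketch with a toy model:

```python
import random

def mse(model, X, y):
    """Mean squared error of a callable model over a dataset."""
    return sum((model(row) - t) ** 2 for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Error increase after shuffling one feature column: a feature the
    model relies on produces a large increase; an unused one produces ~0."""
    base = mse(model, X, y)
    rng = random.Random(seed)
    col = [row[feature] for row in X]
    rng.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
    return mse(model, X_perm, y) - base

# Toy "model": price depends only on feature 0 (e.g. a supply indicator).
model = lambda row: 2.0 * row[0]
X = [[1.0, 5.0], [2.0, 1.0], [3.0, 9.0], [4.0, 2.0]]
y = [2.0, 4.0, 6.0, 8.0]
imp0 = permutation_importance(model, X, y, 0)
imp1 = permutation_importance(model, X, y, 1)
```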


Subject(s)
Artificial Intelligence; Commerce; Humans; Memory, Short-Term
9.
Article in Chinese | WPRIM (Western Pacific) | ID: wpr-1026216

ABSTRACT

A GC-LSTM model is proposed based on the global-optimization characteristics of the genetic algorithm. The model automatically and iteratively searches for the optimal hyper-parameter configuration of the C-LSTM model through a genetic algorithm with a specific genetic strategy; it is configured using the genetic iteration results and validated on the MIT-BIH arrhythmia database according to the classification criteria of the Association for the Advancement of Medical Instrumentation. Testing shows that the classification accuracy, sensitivity, precision, and F1 value of the GC-LSTM model are 99.37%, 95.62%, 95.17%, and 95.39%, respectively, higher than those of the manually established model, and it also outperforms existing mainstream methods. Experimental results demonstrate that the proposed method achieves better classification performance while avoiding a large number of manual tuning experiments.
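A genetic hyperparameter search of this kind can be sketched as follows; the "fitness" below is a stand-in for validation accuracy, and the operators (tournament selection, blend crossover, uniform mutation) are illustrative choices, not the paper's specific genetic strategy.

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=30, seed=42):
    """Tiny real-coded GA over one hyperparameter."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)           # tournament of two
            p1 = a if fitness(a) > fitness(b) else b
            c, d = rng.sample(pop, 2)
            p2 = c if fitness(c) > fitness(d) else d
            child = 0.5 * (p1 + p2)             # blend crossover
            if rng.random() < 0.2:              # mutation
                child += rng.uniform(-0.1, 0.1) * (hi - lo)
            nxt.append(min(hi, max(lo, child)))
        pop = nxt
    return max(pop, key=fitness)

# Stand-in "validation accuracy" peaking at a learning-rate exponent of -3.
fitness = lambda x: -(x + 3.0) ** 2
best = evolve(fitness, bounds=(-6.0, 0.0))
```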

10.
BMC Public Health ; 23(1): 2400, 2023 12 02.
Article in English | MEDLINE | ID: mdl-38042794

ABSTRACT

BACKGROUND: In 2022, Omicron outbreaks occurred at multiple sites in China. Tracking the incidence trends and transmission dynamics of coronavirus disease 2019 (COVID-19) is of great importance for guiding further interventions. METHODS: Given their similarities in population size, economic level, and transport level, two pairs of outbreaks (Shanghai vs. Chengdu and Sanya vs. Beihai) were selected for analysis. We developed SEAIQRD, ARIMA, and LSTM models to identify the best techniques for modeling Omicron-associated waves in terms of data-driven predictive performance and mechanistic transmission dynamics, respectively. In addition, we quantitatively modeled the impacts of different combinations of more stringent interventions on the course of the epidemic through scenario analyses. RESULTS: The best-performing LSTM model showed better prediction accuracy than the best-performing SEAIQRD and ARIMA models in most cases studied. The SEAIQRD model had an absolute advantage in exploring the transmission dynamics of the outbreaks. Regarding both the time to inflection point and the time for the Rt curve to fall below 1.0, Shanghai lagged Chengdu (day 46 vs. day 12/day 54 vs. day 14), and Sanya lagged Beihai (day 16 vs. day 12/day 20 vs. day 16). Regarding both the number of peak cases and the cumulative number of infections, Shanghai exceeded Chengdu (34,350 vs. 188/623,870 vs. 2,181), and Sanya exceeded Beihai (1,105 vs. 203/16,289 vs. 3,184). Scenario analyses suggested that upgrading the control level in advance, while increasing the index decline rate and the quarantine rate, was of great significance for shortening the time to peak and to Rt below 1.0, as well as for reducing the number of peak cases and the final affected population. CONCLUSIONS: The LSTM model has great potential for predicting the prevalence of Omicron outbreaks, whereas the SEAIQRD model is highly effective in revealing their internal transmission mechanisms.
We recommend joint interventions to contain the spread of the virus.
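The SEAIQRD compartments are not spelled out in the abstract; a minimal discrete-time SEIR model illustrates the general compartmental approach, with made-up parameters.

```python
def seir_step(s, e, i, r, beta, sigma, gamma, n):
    """One day of a discrete-time SEIR model: S -> E -> I -> R."""
    new_exposed = beta * s * i / n      # new infections (contact rate beta)
    new_infectious = sigma * e          # latency leaves E at rate sigma
    new_recovered = gamma * i           # recovery at rate gamma
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_recovered,
            r + new_recovered)

def simulate(days=150, n=1_000_000, beta=0.5, sigma=0.2, gamma=0.1):
    """Simulate an outbreak seeded with 10 infectious individuals."""
    s, e, i, r = n - 10, 0.0, 10.0, 0.0
    history = [(s, e, i, r)]
    for _ in range(days):
        s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma, n)
        history.append((s, e, i, r))
    return history

hist = simulate()
peak_day = max(range(len(hist)), key=lambda t: hist[t][2])
```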


Subject(s)
COVID-19; Humans; COVID-19/epidemiology; China/epidemiology; Cities/epidemiology; Incidence; SARS-CoV-2
11.
Diagnostics (Basel) ; 13(22)2023 Nov 20.
Article in English | MEDLINE | ID: mdl-37998615

ABSTRACT

The rise in cardiovascular diseases necessitates accurate electrocardiogram (ECG) diagnostics, making high-quality ECG recordings essential. Our CNN-LSTM model, embedded in an open-access GUI and trained on balanced datasets collected in clinical settings, excels in automating ECG quality assessment. When tested across three datasets featuring varying ratios of acceptable to unacceptable ECG signals, it achieved an F1 score ranging from 95.87% to 98.40%. Training the model on real noise sources significantly enhances its applicability in real-life scenarios, compared to simulations. Integrated into a user-friendly toolbox, the model offers practical utility in clinical environments. Furthermore, our study underscores the importance of balanced class representation during training and testing phases. We observed a notable F1 score change from 98.09% to 95.87% when the class ratio shifted from 85:15 to 50:50 in the same testing dataset with equal representation. This finding is crucial for future ECG quality assessment research, highlighting the impact of class distribution on the reliability of model training outcomes.
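The reported F1 shift with class ratio can be reproduced qualitatively: for a classifier with fixed sensitivity and specificity, F1 for the positive class rises as positives dominate the test set. A sketch with assumed 95%/95% rates:

```python
def f1_from_rates(sensitivity, specificity, n_pos, n_neg):
    """F1 for the positive class given fixed per-class error rates."""
    tp = sensitivity * n_pos
    fn = n_pos - tp
    fp = (1.0 - specificity) * n_neg
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# The same classifier evaluated under two test-set class ratios.
f1_85_15 = f1_from_rates(0.95, 0.95, n_pos=850, n_neg=150)
f1_50_50 = f1_from_rates(0.95, 0.95, n_pos=500, n_neg=500)
```

The drop from the 85:15 to the 50:50 ratio comes entirely from precision: the balanced set contributes proportionally more false positives, mirroring the 98.09% to 95.87% change the study observed.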

12.
Curr Diabetes Rev ; 2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37867273

ABSTRACT

BACKGROUND: Diabetes is a common and deadly chronic disease caused by high blood glucose levels that can cause heart problems, neurological damage, and other illnesses. Early detection of diabetes allows patients to live healthier lives. Many machine learning and deep learning techniques have been applied to noninvasive diabetes prediction. Several studies have shown that the CNN-LSTM method, a combination of CNN and LSTM, performs well for predicting diabetes compared with other deep learning methods. METHOD: This paper reviews CNN-LSTM-based studies for diabetes prediction. In the CNN-LSTM model, the CNN comprises convolution and max-pooling layers and is applied for feature extraction, and the output of the max-pooling layer is fed into the LSTM layer for classification. DISCUSSION: The CNN-LSTM model performs well in extracting hidden features and correlations between physiological variables, so it can be used to predict diabetes. Like other deep neural network architectures, the CNN-LSTM model faces challenges such as training on large datasets and biological factors. Using large datasets can further improve detection accuracy. CONCLUSION: The CNN-LSTM model is a promising and reliable method for diabetes prediction compared with other deep learning models.
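The CNN front end described here — convolution for local feature extraction followed by max pooling — can be sketched in 1D on a toy signal (the kernel and signal are illustrative, not from any clinical dataset):

```python
def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation, as in most CNN layers)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def maxpool1d(values, size=2):
    """Non-overlapping max pooling: keep the strongest response per window."""
    return [max(values[i:i + size])
            for i in range(0, len(values) - size + 1, size)]

# Toy physiological signal; the kernel responds to local level changes.
signal = [0.0, 0.1, 0.9, 0.2, 0.0, 0.8, 0.1, 0.0]
edge_kernel = [1.0, -1.0]
features = maxpool1d(conv1d(signal, edge_kernel))
```

In the reviewed architecture, a sequence of such pooled feature maps (over many channels) is what the LSTM layer consumes for classification.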

13.
Environ Sci Pollut Res Int ; 30(47): 104388-104407, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37702870

ABSTRACT

Climate change and human activities have greatly altered the ecological flow of rivers, and the conflict between human water use and natural water demand is becoming increasingly prominent. Using two ecological flow indicators (ecodeficit and ecosurplus), this study assesses the characteristics of ecological flow changes at multiple time scales and introduces the Long Short-Term Memory model to construct a meteorological streamflow model for the Xiangjiang River (XJR) basin, using a separation framework to quantify the effects of human disturbance and climate change on ecological flow at multiple time scales. In addition, the fluvial biodiversity Shannon Index (SI) was used to assess the response of riverine ecosystems under changing conditions. The results show that XJR flow increased substantially (by 11%) after 1991, basin precipitation increased by 5.60% while potential evapotranspiration decreased by 3.09%, and all three exhibit clear cycles on annual and seasonal scales. The annual ecosurplus increased and the annual ecodeficit decreased after the hydrological variation; on the seasonal scale, the ecodeficit decreased significantly in summer and autumn, and the ecosurplus increased substantially in winter. Climatic factors were the main drivers of the increased frequency and magnitude of annual, summer, and autumn high flows (91%, 94%, and 65% contributions, respectively), while urbanization expansion and reservoir diversions drove the increase in the spring ecodeficit. Changes in river flow kept the ecosurplus at a low level after 2002, further causing a decrease in river biodiversity, and the annual and summer ecosurplus were highly correlated with the SI indicator (0.824 and 0.711, respectively). Our study contributes to the development of effective ecological flow regulation policies for the XJR basin.
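A simplified sketch of the two indicators: here ecosurplus and ecodeficit are computed against fixed 75th/25th-percentile envelopes from a reference period and normalized by total flow, which is a simplification of the flow-duration-curve definitions used in the literature.

```python
def percentile(sorted_vals, q):
    """Linear-interpolation percentile on a pre-sorted list (q in [0, 100])."""
    idx = q / 100 * (len(sorted_vals) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (sorted_vals[hi] - sorted_vals[lo]) * (idx - lo)

def eco_indicators(reference, observed):
    """Ecosurplus: flow above the 75th-percentile envelope; ecodeficit: flow
    below the 25th-percentile envelope. Both normalized by total flow."""
    ref = sorted(reference)
    p25, p75 = percentile(ref, 25), percentile(ref, 75)
    total = sum(observed)
    surplus = sum(max(0.0, q - p75) for q in observed) / total
    deficit = sum(max(0.0, p25 - q) for q in observed) / total
    return surplus, deficit

# Illustrative flows (m^3/s): reference period vs. an observed period.
reference = [50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 110.0, 120.0]
observed = [40.0, 55.0, 130.0, 65.0]
surplus, deficit = eco_indicators(reference, observed)
```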


Subject(s)
Ecosystem; Rivers; Humans; Biodiversity; Climate Change; Seasons; Water
14.
Chemosphere ; 342: 140153, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37714468

ABSTRACT

Compared with traditional chemical reaction-based detection methods, modeling-based prediction methods enable rapid, reagent-free air pollution detection from inexpensive multi-source data, allowing the air pollution situation to be understood quickly. In this study, a convolutional neural network (CNN) and a long short-term memory (LSTM) neural network are integrated into a CNN-LSTM time series prediction model to predict the concentration of PM2.5 and its chemical components (i.e., heavy metals, carbon components, and water-soluble ions) using meteorological data and air pollutants (PM2.5, SO2, NO2, CO, and O3). In the integrated CNN-LSTM model, the CNN uses convolutional and pooling layers to extract features from the data, whereas the powerful nonlinear mapping and learning capabilities of the LSTM enable the time series prediction of air pollution. The experimental results showed that the CNN-LSTM exhibited good generalization in predicting As, Cd, Cr, Cu, Ni, and Zn, with a mean R2 above 0.9. Mean R2 values for PM2.5, Pb, Ti, EC, OC, SO42-, and NO3- ranged from 0.85 to 0.9. Shapley values showed that PM2.5, NO2, SO2, and CO had the greatest influence on the model's predicted heavy metal results. For water-soluble ions, the predictions were dominated by PM2.5, CO, and humidity, while the prediction of the carbon fraction was affected mainly by the PM2.5 concentration. Additionally, several input variables for various components were eliminated without affecting the prediction accuracy of the model (R2 between 0.70 and 0.84), maximizing modeling efficiency and lowering operational costs. The fully trained model showed that most predicted components of PM2.5 were lower from January to March 2020 than in 2018 and 2019.
This study provides insight into improving the accuracy of modeling-based detection methods and promotes the development of integrated air pollution monitoring in a more sustainable direction.

15.
Heliyon ; 9(8): e18506, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37520967

ABSTRACT

The impact of the suspended sediment load (SSL) on environmental health, agricultural operations, and water resources planning is significant. The deposition of SSL restricts the streamflow region, affecting aquatic life migration and eventually causing a river course shift. As a result, data on suspended sediments and their fluctuations are essential for a number of authorities, especially water resources decision makers. SSL prediction is often difficult due to issues such as site-specific data, site-specific models, the lack of several substantial predictive components, and the complexity of its pattern. In the past two decades, many machine learning algorithms have shown huge potential for riverine SSL prediction. However, these models have not provided very reliable results, leading to the conclusion that the accuracy of SSL prediction should be improved. To address these concerns, this research proposes a Long Short-Term Memory (LSTM) model for SSL prediction and applies it to the Johor River in Malaysia, using suspended sediment load and river flow data for the period 2010 to 2020. Four alternative models — Multi-Layer Perceptron (MLP) neural network, Support Vector Regression (SVR), Random Forest (RF), and LSTM — were investigated to predict the suspended sediment load. The proposed model attained a high correlation between predicted and actual SSL (0.97), with a minimum RMSE (148.4 ton/day) and a minimum MAE (33.43 ton/day), and can thus be generalized for application in similar rivers around the world.
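The evaluation metrics quoted (correlation, RMSE, MAE) are straightforward to compute; a sketch with illustrative values in ton/day:

```python
def rmse(pred, obs):
    """Root mean square error."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

def mae(pred, obs):
    """Mean absolute error."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

def pearson(pred, obs):
    """Pearson correlation between predictions and observations."""
    n = len(obs)
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    sp = sum((p - mp) ** 2 for p in pred) ** 0.5
    so = sum((o - mo) ** 2 for o in obs) ** 0.5
    return cov / (sp * so)

obs = [100.0, 150.0, 210.0, 180.0]    # observed SSL, ton/day (illustrative)
pred = [110.0, 140.0, 200.0, 190.0]
```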

16.
Epidemiol Infect ; 151: e54, 2023 03 14.
Article in English | MEDLINE | ID: mdl-37039461

ABSTRACT

Hand, foot and mouth disease (HFMD) is a common infection worldwide, and its epidemics result in heavy disease burdens. Over the past decade, HFMD has been widespread among children in China, with Shanxi Province being a severely affected northern province. Located in the temperate monsoon zone, Shanxi has a GDP of over 2.5 trillion yuan. It is important to have a comprehensive understanding of the basic features of HFMD in areas with meteorological and economic backgrounds similar to northern China. We aimed to investigate epidemiological characteristics, identify spatial clusters, and predict the monthly incidence of HFMD. All reported HFMD cases were obtained from the Shanxi Center for Disease Control and Prevention. Overall HFMD incidence showed a significant downward trend from 2017 to 2020, increasing again in 2021. Children aged < 5 years were primarily affected, with a high incidence of HFMD in male patients (relative risk: 1.316). The distribution showed a seasonal trend, with major peaks in June and July and secondary peaks in October and November, with the exception of 2020. Other enteroviruses were the predominant causative agents of HFMD in most years. Areas with large numbers of HFMD cases were primarily in central Shanxi, and spatial clusters in 2017 and 2018 showed a positive global spatial correlation. Local spatial autocorrelation analysis showed that hot spots and secondary hot spots were concentrated in Jinzhong and Yangquan in 2018. Based on monthly incidence from September 2021 to August 2022, the mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean square error (RMSE) of the long short-term memory (LSTM) and seasonal autoregressive integrated moving average (SARIMA) models were 386.58 vs. 838.25, 2.25 vs. 3.08, and 461.96 vs. 963.13, respectively, indicating that the predictive accuracy of LSTM was better than that of SARIMA.
The LSTM model may be useful in predicting monthly incidences of HFMD, which may provide early warnings of HFMD epidemics.


Subject(s)
Hand, Foot and Mouth Disease; Child; Humans; Male; Incidence; Risk; Spatial Analysis; China/epidemiology
17.
BMC Infect Dis ; 23(1): 71, 2023 Feb 06.
Article in English | MEDLINE | ID: mdl-36747126

ABSTRACT

BACKGROUND: Influenza is an acute respiratory infectious disease that is highly infectious and seriously damages human health. Reasonable prediction is of great significance to control the epidemic of influenza. METHODS: Our Influenza data were extracted from Shanxi Provincial Center for Disease Control and Prevention. Seasonal-trend decomposition using Loess (STL) was adopted to analyze the season characteristics of the influenza in Shanxi Province, China, from the 1st week in 2010 to the 52nd week in 2019. To handle the insufficient prediction performance of the seasonal autoregressive integrated moving average (SARIMA) model in predicting the nonlinear parts and the poor accuracy of directly predicting the original sequence, this study established the SARIMA model, the combination model of SARIMA and Long-Short Term Memory neural network (SARIMA-LSTM) and the combination model of SARIMA-LSTM based on Singular spectrum analysis (SSA-SARIMA-LSTM) to make predictions and identify the best model. Additionally, the Mean Squared Error (MSE), Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) were used to evaluate the performance of the models. RESULTS: The influenza time series in Shanxi Province from the 1st week in 2010 to the 52nd week in 2019 showed a year-by-year decrease with obvious seasonal characteristics. The peak period of the disease mainly concentrated from the end of the year to the beginning of the next year. The best fitting and prediction performance was the SSA-SARIMA-LSTM model. Compared with the SARIMA model, the MSE, MAE and RMSE of the SSA-SARIMA-LSTM model decreased by 38.12, 17.39 and 21.34%, respectively, in fitting performance; the MSE, MAE and RMSE decreased by 42.41, 18.69 and 24.11%, respectively, in prediction performances. 
Furthermore, compared with the SARIMA-LSTM model, the MSE, MAE and RMSE of the SSA-SARIMA-LSTM model decreased by 28.26, 14.61 and 15.30%, respectively, in fitting performance; the MSE, MAE and RMSE decreased by 36.99, 7.22 and 20.62%, respectively, in prediction performance. CONCLUSIONS: The fitting and prediction performances of the SSA-SARIMA-LSTM model were better than those of the SARIMA and SARIMA-LSTM models. Generally speaking, the SSA-SARIMA-LSTM model can be applied to influenza prediction and can support public health policymaking.
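A crude stand-in for the seasonal-trend decomposition step (STL in the paper) is a centered moving average for the trend plus per-phase means for the seasonal component; this additive sketch uses a toy period-3 series rather than weekly influenza data.

```python
def decompose(series, period):
    """Additive decomposition: trend = centered moving average over one
    (odd) period; seasonal = per-phase mean of the detrended values."""
    half = period // 2
    n = len(series)
    trend = [None] * n
    for t in range(half, n - half):
        window = series[t - half:t + half + 1]
        trend[t] = sum(window) / len(window)
    seasonal = {}
    for p in range(period):
        vals = [series[t] - trend[t] for t in range(n)
                if trend[t] is not None and t % period == p]
        seasonal[p] = sum(vals) / len(vals) if vals else 0.0
    return trend, seasonal

# Toy incidence-like series: linear trend plus a period-3 seasonal pattern.
pattern = [2.0, -1.0, -1.0]
series = [10.0 + 0.5 * t + pattern[t % 3] for t in range(12)]
trend, seasonal = decompose(series, period=3)
```

The remainder (series minus trend minus seasonal) is what a downstream model such as the LSTM component would be asked to fit.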


Subject(s)
Influenza, Human; Humans; Influenza, Human/epidemiology; Forecasting; Incidence; Neural Networks, Computer; China/epidemiology; Models, Statistical
18.
Phys Med Biol ; 68(2)2023 01 05.
Article in English | MEDLINE | ID: mdl-36595253

ABSTRACT

Objective. To develop a novel patient-specific cardio-respiratory motion prediction approach for X-ray angiography time series based on a simple long short-term memory (LSTM) model. Approach. The cardio-respiratory motion behavior in an X-ray image sequence was represented as a sequence of 2D affine transformation matrices, which provide the displacement information of contrasted moving objects (arteries and medical devices) in a sequence. The displacement information includes translation, rotation, shearing, and scaling in 2D. A many-to-many LSTM model was developed to predict 2D transformation parameters in matrix form for future frames based on previously generated images. The method was developed with 64 simulated phantom datasets (pediatric and adult patients) using a realistic cardio-respiratory motion simulator (XCAT) and was validated using 10 different patient X-ray angiography sequences. Main results. Using this method we achieved less than 1 mm prediction error for complex cardio-respiratory motion prediction. The following mean prediction error values were recorded over all the simulated sequences: 0.39 mm (for both motions), 0.33 mm (for only cardiac motion), and 0.47 mm (for only respiratory motion). The mean prediction error for the patient dataset was 0.58 mm. Significance. This study paves the road for a patient-specific cardio-respiratory motion prediction model, which might improve navigation guidance during cardiac interventions.
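Representing frame-to-frame motion as 2D affine matrices works as follows: homogeneous 3x3 matrices compose by multiplication and act on image points. The displacement values here are illustrative, not from the study.

```python
import math

def affine(tx=0.0, ty=0.0, angle=0.0, scale=1.0):
    """3x3 homogeneous 2D affine matrix: rotation+scale, then translation."""
    c, s = math.cos(angle) * scale, math.sin(angle) * scale
    return [[c, -s, tx],
            [s,  c, ty],
            [0.0, 0.0, 1.0]]

def matmul(a, b):
    """Compose two 3x3 transforms (b applied first, then a)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, point):
    """Apply a homogeneous 2D transform to an (x, y) point."""
    x, y = point
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Illustrative inter-frame displacement: rotate 90 degrees about the origin,
# then translate 2 mm to the right.
m = matmul(affine(tx=2.0), affine(angle=math.pi / 2))
x, y = apply(m, (1.0, 0.0))
```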


Subject(s)
Angiography , Heart , Humans , Child , X-Rays , Heart/diagnostic imaging , Motion (Physics)
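The motion representation in this abstract, a per-frame 2D affine matrix encoding translation, rotation, shearing, and scaling, can be sketched with homogeneous coordinates. The composition order and parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def affine_matrix(tx=0.0, ty=0.0, theta=0.0, sx=1.0, sy=1.0, shear=0.0):
    """Compose a 2D affine transform as a homogeneous 3x3 matrix:
    scale/shear, then rotation, then translation (assumed order)."""
    S = np.array([[sx, shear, 0.0], [0.0, sy, 0.0], [0.0, 0.0, 1.0]])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T = np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])
    return T @ R @ S

def transform(points, A):
    """Apply a homogeneous 2D affine transform to an (N, 2) point array."""
    h = np.hstack([points, np.ones((len(points), 1))])
    return (h @ A.T)[:, :2]

# hypothetical landmark positions (in mm) and one frame-to-frame transform
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
A = affine_matrix(tx=1.0, ty=-0.5, theta=np.pi / 6)
moved = transform(pts, A)
displacement = np.linalg.norm(moved - pts, axis=1)  # per-landmark motion in mm
```

An LSTM predicting such matrices for future frames would output the six free parameters of each 3x3 matrix; the mean of `displacement`-style errors between predicted and ground-truth positions corresponds to the millimeter error figures reported above.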
19.
Heliyon ; 8(11): e11670, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36468093

ABSTRACT

In this paper, a prediction method based on a KNN-Prophet-LSTM hybrid model is established using daily pollutant concentration data for Wuhan from January 1, 2014, to May 3, 2021, taking both temporal and spatial characteristics into account. First, the data are decomposed into trend, periodic, and error terms by the Prophet decomposition method. Exploiting the respective strengths of the Prophet and Long Short-Term Memory (LSTM) models, the trend and periodic terms are predicted by the Prophet model, the LSTM model is used to predict the error terms, and the K-Nearest Neighbor (KNN) algorithm is added to fuse spatial and temporal information to predict the ozone (O3) concentration day by day. To demonstrate the effectiveness and rationality of the KNN-Prophet-LSTM hybrid model, four groups of comparative experiments are set up against the single models Autoregressive Integrated Moving Average (ARIMA), Prophet, and LSTM, and the hybrid model Prophet-LSTM. The experimental results show that: (1) the daily maximum 8-hour average O3 concentration in Wuhan has a significant periodic variation; differences in the surrounding environment lead to differences in O3 concentration changes within a region, and O3 concentration changes at similar stations are highly similar. (2) The Prophet decomposition algorithm effectively extracts the time-series information from the original series and removes noise, clearly improving prediction accuracy. (3) Incorporating spatial information from surrounding sites via the KNN algorithm further improves accuracy: compared with the baseline ARIMA model, the error is reduced by approximately 49.76% in mean absolute error (MAE) and 46.81% in root mean square error (RMSE). (4) The hybrid model generally outperforms the single models, with higher prediction accuracy.

20.
PeerJ Comput Sci ; 8: e1148, 2022.
Article in English | MEDLINE | ID: mdl-36426260

ABSTRACT

Correctly predicting the direction of stock price movement is of immense importance in the financial market. In recent years, with the growth in data dimension and volume, the nonstationary and nonlinear characteristics of financial data have made it difficult to predict stock movement accurately. In this article, we propose a methodology that combines technical analysis and sentiment analysis to construct predictor variables and then applies the improved LASSO-LSTM model to forecast stock direction. First, financial textual content and historical stock transaction data are crawled from websites. Then the transfer-learning model FinBERT is used to recognize the sentiment of the textual data, and the TTR package is used to calculate technical indicators from the historical price data. To eliminate the multicollinearity of the combined predictor variables, we improve the long short-term memory (LSTM) neural network with the Least Absolute Shrinkage and Selection Operator (LASSO). In the prediction phase, we use the screened variables as the input vector to train the LASSO-LSTM model. To evaluate model performance, we compare the LASSO-LSTM and baseline models on accuracy and robustness metrics, and we apply the Wilcoxon signed-rank test to assess the significance of the differences in results. The experimental results show that the LASSO-LSTM model with technical and sentiment indicators achieves an average accuracy improvement of 8.53% over the standard LSTM. Consequently, this study shows that utilizing historical transaction and financial sentiment data can capture critical information affecting stock movement, and that effective variable selection retains the key variables and improves model prediction performance.
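The LASSO screening step described here, shrinking coefficients of redundant predictors to zero before training the LSTM, can be sketched with a plain coordinate-descent LASSO in NumPy. The data, penalty, and feature count are synthetic assumptions; real pipelines typically use a library implementation such as scikit-learn's:

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=200):
    """LASSO via cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j's contribution removed
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            z = (X[:, j] @ X[:, j]) / n
            # soft-threshold: small correlations are shrunk exactly to zero
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z
    return w

# synthetic design: only the first two of ten predictors carry signal
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.1, 200)
w = lasso_cd(X, y, alpha=0.1)
selected = np.flatnonzero(np.abs(w) > 1e-3)  # indices of the screened-in variables
```

The surviving columns (`selected`) would then form the input vector for the LSTM, which is the variable-screening role LASSO plays in the abstract's LASSO-LSTM model.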
