Results 1 - 20 of 99
1.
J Voice ; 2024 Sep 07.
Article in English | MEDLINE | ID: mdl-39244383

ABSTRACT

Voice pathologies occur due to various factors, such as malfunction of the vocal cords. Computerized detection of voice pathology from acoustic examinations, which can be performed with different acoustic measurements, is crucial for early diagnosis, efficient follow-up, and improving problematic speech. In practice, however, this process requires expert monitoring and is unpopular with patients because it is time-consuming and costly. This paper aims at metaheuristic-based automatic voice pathology detection. First, feature maps for 10 common disorders, including cordectomy, dysphonia, front lateral partial resection, contact pachydermia, laryngitis, leukoplakia, pure breath, recurrent laryngeal paralysis, vocal fold polyp, and vox senilis, were obtained from the Zero-Crossing Rate (ZCR), Root-Mean-Square Energy, and Mel-Frequency Cepstral Coefficients (MFCC) of a thousand voice signals from the Saarbruecken Voice Database. Features extracted from voices of the same disorders by these three methods were hybridized to increase the model's performance. A Grey Wolf Optimizer augmented with local search and an evolutionary operator (MELGWO), applied to the concatenated feature maps derived from the various approaches, was employed to minimize the number of features, run the models faster, and produce the best result. The fitness values of the metaheuristic algorithms were then determined using supervised machine learning techniques such as the Support Vector Machine (SVM) and K-nearest neighbors. The F1 score, sensitivity, specificity, accuracy, and other assessment criteria were compared across the experiments. The best accuracy, 99.50%, was achieved by the SVM classifier using the feature maps optimized by the improved MELGWO algorithm.
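
As a rough sketch of the feature-extraction front end described above (not the authors' code), the following combines ZCR, RMS energy, and MFCC summaries into one feature map per recording and scores an SVM by cross-validation; the file list, labels, and summary statistics are illustrative assumptions.

```python
# Sketch: hand-crafted acoustic features + SVM, loosely following the
# pipeline above (ZCR, RMS energy, MFCC -> concatenated feature map -> SVM).
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def feature_map(path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)
    zcr = librosa.feature.zero_crossing_rate(y)             # (1, frames)
    rms = librosa.feature.rms(y=y)                          # (1, frames)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    feats = np.vstack([zcr, rms, mfcc])
    # Summarize frames by mean and std to get a fixed-length vector.
    return np.hstack([feats.mean(axis=1), feats.std(axis=1)])

# Hypothetical corpus layout; the Saarbruecken file structure differs.
# samples = [("recordings/laryngitis_001.wav", "laryngitis"), ...]
# X = np.array([feature_map(p) for p, _ in samples])
# y = np.array([lbl for _, lbl in samples])
# print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```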

2.
J Electrocardiol ; 86: 153783, 2024.
Article in English | MEDLINE | ID: mdl-39213712

ABSTRACT

Analyzing Electrocardiogram (ECG) signals is imperative for diagnosing cardiovascular diseases. However, evaluating ECG analysis techniques faces challenges due to noise and artifacts in actual signals. Machine learning for automatic diagnosis encounters data acquisition hurdles due to medical data privacy constraints. Addressing these issues, ECG modeling assumes a crucial role in biomedical engineering, and parametric spline-based methods have garnered significant attention for their ability to accurately represent the complex temporal dynamics of ECG signals. This study conducts a comparative analysis of two parametric spline-based methods, the B-spline and the Hermite cubic spline, for ECG modeling, aiming to identify the most effective approach for accurate and reliable ECG representation. The Hermite cubic spline serves as one of the most effective interpolation methods, while the B-spline is an approximation method. The comparative analysis includes both qualitative and quantitative evaluations. Qualitative assessment involves visually inspecting the generated spline-based models, comparing their resemblance to the original ECG signals, and employing power spectrum analysis. Quantitative analysis incorporates metrics such as root mean square error (RMSE), Percentage Root Mean Square Difference (PRD), and cross-correlation, offering a more objective measure of the models' performance. Preliminary results indicate promising capabilities for both spline-based methods in representing ECG signals. However, the analysis unveils specific strengths and weaknesses for each method. The B-spline method offers greater flexibility and smoothness, while the cubic spline method demonstrates superior waveform-capturing abilities with preservation of control points, a critical aspect in the medical field. The presented research provides valuable insights for researchers and practitioners in selecting the most appropriate method for their specific ECG modeling requirements. Adjustments to control points and parameterization enable the generation of diverse ECG waveforms, enhancing the versatility of this modeling technique. This approach has the potential to extend its utility to other medical signals, presenting a promising avenue for advancing biomedical research.
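
A minimal sketch of the comparison described above, assuming SciPy's splrep/BSpline for the approximating B-spline and CubicHermiteSpline for the interpolating Hermite spline, scored with RMSE and PRD on a toy beat (the paper's ECG data and parameter choices are not reproduced):

```python
import numpy as np
from scipy.interpolate import splrep, BSpline, CubicHermiteSpline

t = np.linspace(0, 1, 400)
ecg = np.sin(2 * np.pi * 3 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)  # toy beat
noisy = ecg + 0.02 * np.random.default_rng(0).standard_normal(t.size)

# B-spline: approximation with a smoothing factor s > 0.
tck = splrep(t, noisy, s=0.05)
b_fit = BSpline(*tck)(t)

# Hermite cubic spline: interpolation through control points with slopes.
knots = t[::20]
vals = noisy[::20]
slopes = np.gradient(vals, knots)
h_fit = CubicHermiteSpline(knots, vals, slopes)(t)

def rmse(a, b): return np.sqrt(np.mean((a - b) ** 2))
def prd(a, b): return 100 * np.sqrt(np.sum((a - b) ** 2) / np.sum(a ** 2))

for name, fit in [("B-spline", b_fit), ("Hermite", h_fit)]:
    print(name, rmse(ecg, fit), prd(ecg, fit))
```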


Asunto(s)
Electrocardiografía , Procesamiento de Señales Asistido por Computador , Electrocardiografía/métodos , Humanos , Algoritmos , Aprendizaje Automático , Reproducibilidad de los Resultados
3.
JMIR Public Health Surveill ; 10: e53719, 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39166439

ABSTRACT

Background: The COVID-19 pandemic has revealed significant challenges in disease forecasting and in developing a public health response, emphasizing the need to manage missing data from various sources when making accurate forecasts. Objective: We aimed to show how the handling of missing data can affect estimates of the COVID-19 incidence rate (CIR) in different pandemic situations. Methods: This study used data from the COVID-19/SARS-CoV-2 surveillance system at the National Institute of Hygiene and Epidemiology, Vietnam. We separated the available data set into 3 distinct periods: zero COVID-19, transition, and new normal. We randomly removed 5% to 30% of the data, in increments of 5%, under a missing-completely-at-random mechanism applied to the daily COVID-19 caseload variable. We selected 7 analytical methods to assess the effects of handling missing data and calculated statistical and epidemiological indices to measure the effectiveness of each method. Results: Our study examined missing data imputation performance across 3 study time periods: zero COVID-19 (n=3149), transition (n=1290), and new normal (n=9288). Imputation analyses showed that K-nearest neighbor (KNN) had the lowest mean absolute percentage change (APC) in CIR across the range (5% to 30%) of missing data. For instance, with 15% missing data, KNN resulted in 10.6%, 10.6%, and 9.7% average bias across the zero COVID-19, transition, and new normal periods, compared to 39.9%, 51.9%, and 289.7% with the maximum likelihood method. The autoregressive integrated moving average model showed the greatest mean APC in the mean number of confirmed cases of COVID-19 during each COVID-19 containment cycle (CCC) when we imputed the missing data in the zero COVID-19 period, rising from 226.3% at the 5% missing level to 6955.7% at the 30% missing level. Imputing missing data with median imputation had the lowest bias in the average number of confirmed cases in each CCC at all levels of missing data. In detail, in the 20% missing scenario, median imputation had an average bias of 16.3% for confirmed cases in each CCC, lower than the KNN figure, while maximum likelihood imputation showed the highest average bias, 92.4%. During the new normal period, in the 25% and 30% missing data scenarios, KNN imputation had average biases for CIR and confirmed cases in each CCC ranging from 21% to 32% for both, while maximum likelihood and moving average imputation showed average biases above 250% for both CIR and confirmed cases in each CCC. Conclusions: Our study emphasizes that the imputation method used by investigators should be tailored to the specific epidemiological context and data collection environment to ensure reliable estimates of the CIR.
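
The following sketch mimics the evaluation design on synthetic data: values are removed completely at random from a daily case series, imputed with scikit-learn's KNNImputer, and the absolute percentage change in the incidence rate is reported; population size, series, and neighbor count are illustrative assumptions.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(42)
cases = rng.poisson(lam=200, size=365).astype(float)   # daily confirmed cases
population = 1_000_000
cir_true = cases.sum() / population * 100_000          # incidence per 100k

for frac in (0.05, 0.15, 0.30):
    masked = cases.copy()
    idx = rng.choice(cases.size, int(frac * cases.size), replace=False)
    masked[idx] = np.nan
    # KNNImputer works on feature columns; use day-of-series as a companion.
    X = np.column_stack([np.arange(cases.size), masked])
    imputed = KNNImputer(n_neighbors=5).fit_transform(X)[:, 1]
    cir_hat = imputed.sum() / population * 100_000
    print(f"{frac:.0%} missing: APC = {abs(cir_hat - cir_true) / cir_true:.2%}")
```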


Subject(s)
COVID-19; Humans; COVID-19/epidemiology; Incidence; Vietnam/epidemiology; Data Analysis; Data Interpretation, Statistical; Pandemics; Secondary Data Analysis
4.
Int Ophthalmol ; 44(1): 303, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38954051

ABSTRACT

PURPOSE: To investigate the most appropriate mathematical formula for objectively expressing upper eyelid contour symmetry. METHODS: Sixty-two eyes of 31 patients were included in the study. The upper eyelid contour symmetry of the patients was classified subjectively (independently of MRD1) as poor, acceptable, or good by three oculoplastic specialists (a senior, an expert, and a junior surgeon). Bézier curves of the upper lid contour were drawn with ImageJ software (NIH, Bethesda, MD, USA). Using algorithms created by author SKC in Spyder (Python 3.7.9), the Bézier curves of the left eyelids were mirrored about the y-axis and the mid-pupils of both eyes were superimposed. The lower curve was moved vertically to the height of the other curve to equalize MRD1 values. R2 (coefficient of determination), RMSE (root-mean-square error), MSE (mean squared error), POC (percentage of co-efficiency), and MAE (mean absolute error) were calculated. We evaluated the correlation between these objective formulas and the subjective grading of the three surgeons using Spearman's rho (ρ). RESULTS: The correlation coefficients of RMSE and MSE were identical for all surgeons' gradings. There was a strong correlation between the senior surgeon's subjective scoring (N: poor = 8, acceptable = 16, good = 8) and R2, RMSE, POC, and MAE (ρ = 0.643, p < 0.001; ρ = -0.607, p < 0.001; ρ = 0.562, p < 0.001; ρ = -0.517, p < 0.001, respectively). We found a strong relationship between the expert surgeon's subjective scoring (N: poor = 9, acceptable = 13, good = 10) and R2 (ρ = 0.611, p < 0.001), RMSE (ρ = -0.549, p < 0.001), POC (ρ = 0.511, p < 0.001), and MAE (ρ = -0.450, p < 0.05). We found a strong correlation between the junior surgeon's subjective scoring (N: poor = 6, acceptable = 18, good = 8) and R2, RMSE, and POC (ρ = -0.517, p < 0.001; ρ = -0.470, p < 0.001; ρ = 0.521, p < 0.001, respectively) and a moderate correlation with MAE (ρ = -0.394, p < 0.05). The highest correlation was observed with R2. CONCLUSIONS: RMSE, MSE, POC, MAE, and especially R2 may quantitatively express upper eyelid contour symmetry in a manner comparable with subjective grading by oculoplastic surgeons. The highest correlation was observed for the senior surgeon and R2, and correlation strength decreased with decreasing surgeon experience.
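
A small sketch of the underlying computation, assuming sampled cubic Bézier contours and mirror superimposition about the mid-pupil y-axis; the control points are invented, and the pointwise comparison assumes matched curve parameterization.

```python
import numpy as np

def bezier(p0, p1, p2, p3, n=100):
    # Sample a cubic Bezier curve at n parameter values.
    s = np.linspace(0, 1, n)[:, None]
    return ((1 - s) ** 3 * p0 + 3 * (1 - s) ** 2 * s * p1
            + 3 * (1 - s) * s ** 2 * p2 + s ** 3 * p3)

right = bezier(np.array([-15, 0]), np.array([-8, 6]),
               np.array([2, 7]), np.array([12, 1]))
left = bezier(np.array([-14, 0]), np.array([-7, 6.5]),
              np.array([3, 6.5]), np.array([13, 1]))
left_mirrored = (left * np.array([-1, 1]))[::-1]   # reflect about y-axis, reorder

y_r, y_l = right[:, 1], left_mirrored[:, 1]
mse = np.mean((y_r - y_l) ** 2)
rmse = np.sqrt(mse)
mae = np.mean(np.abs(y_r - y_l))
r2 = 1 - np.sum((y_r - y_l) ** 2) / np.sum((y_r - y_r.mean()) ** 2)
print(dict(RMSE=rmse, MSE=mse, MAE=mae, R2=r2))
```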


Subject(s)
Eyelids; Humans; Eyelids/pathology; Female; Male; Middle Aged; Algorithms; Aged; Adult; Blepharoplasty/methods
5.
ArXiv ; 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39070030

ABSTRACT

Petri nets are a promising modeling framework for epidemiology, including the spread of disease across populations or within an individual. In particular, the Susceptible-Infectious-Recovered (SIR) compartment model is foundational for population epidemiological modeling and has been implemented in several prior Petri net studies. However, the SIR model is generally stated as a system of ordinary differential equations (ODEs) with continuous time and variables, while Petri nets are discrete event simulations. To our knowledge, no prior study has investigated the numerical equivalence of Petri net SIR models to the classical ODE formulation. We introduce crucial numerical techniques for implementing SIR models in the GPenSim package for Petri net simulations. We show that these techniques are critical for Petri net SIR models, which achieve a relative root-mean-squared error of less than 1% compared with ODE simulations for biologically relevant parameter ranges. We conclude that Petri nets provide a valid framework for modeling SIR-type dynamics using biologically relevant parameter values, provided that the other Petri net structures we outline are also implemented.
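
The numerical comparison can be illustrated without GPenSim: below, the ODE SIR solution is compared against a simple discrete-event (Poisson step) simulation by relative RMSE; this is only an analogue of the Petri net setup, not the paper's implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, N = 0.3, 0.1, 10_000
S0, I0, R0 = N - 10, 10, 0

def sir(t, y):
    S, I, R = y
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

t = np.arange(0, 160)
ode = solve_ivp(sir, (0, 159), [S0, I0, R0], t_eval=t).y[1]  # infectious

# Discrete-event approximation: events fire per small time step.
dt, steps = 0.1, int(159 / 0.1) + 1
rng = np.random.default_rng(1)
S, I, R, traj = S0, I0, R0, []
for k in range(steps):
    if k % 10 == 0:
        traj.append(I)          # record once per whole time unit
    inf = min(rng.poisson(beta * S * I / N * dt), S)
    rec = min(rng.poisson(gamma * I * dt), I)
    S, I, R = S - inf, I + inf - rec, R + rec

traj = np.asarray(traj, float)
rel_rmse = np.sqrt(np.mean((traj - ode) ** 2)) / np.sqrt(np.mean(ode ** 2))
print(f"relative RMSE: {rel_rmse:.2%}")
```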

6.
Sci Rep ; 14(1): 12626, 2024 06 01.
Article in English | MEDLINE | ID: mdl-38824223

ABSTRACT

This study aims to develop predictive models for rice yield by applying multivariate techniques. It utilizes stepwise multiple regression, discriminant function analysis, and logistic regression to forecast crop yield in specific districts of Haryana. The time series data on rice yield were divided into two and three classes based on crop yield. The yearly time series data of rice yield from 1980-81 to 2020-21 were taken from various issues of the Statistical Abstracts of Haryana. The study also utilized fortnightly meteorological data sourced from the Agrometeorology Department of CCS HAU, India. To compare the performance of the various predictive models, evaluation measures such as Root Mean Square Error, Predicted Error Sum of Squares, Mean Absolute Deviation, and Mean Absolute Percentage Error were used. The results indicated that discriminant function analysis predicted rice yield more accurately than logistic regression. Importantly, the research highlighted that the optimum time for forecasting rice yield is one month prior to the crop's harvest, offering valuable insight for agricultural planning and decision-making. This approach demonstrates the fusion of weather data and advanced statistical techniques, showcasing the potential for more precise and informed agricultural practices.
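
As a schematic analogue of the modeling comparison (with synthetic stand-ins for the Haryana yield and weather data), the sketch below fits discriminant and logistic classifiers and scores them with RMSE, MAD, and MAPE on a numeric yield proxy.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(41, 4))                 # fortnightly weather indices (toy)
yield_t = 25 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=1.0, size=41)
classes = np.digitize(yield_t, np.quantile(yield_t, [1 / 3, 2 / 3]))  # 3 classes

Xtr, Xte, ytr, yte, ztr, zte = train_test_split(
    X, classes, yield_t, test_size=0.3, random_state=0)

for model in (LinearDiscriminantAnalysis(), LogisticRegression(max_iter=1000)):
    pred_class = model.fit(Xtr, ytr).predict(Xte)
    # Map each predicted class to its mean training yield to score numerically.
    class_means = {c: ztr[ytr == c].mean() for c in np.unique(ytr)}
    pred = np.array([class_means[c] for c in pred_class])
    rmse = np.sqrt(np.mean((zte - pred) ** 2))
    mad = np.mean(np.abs(zte - pred))
    mape = np.mean(np.abs((zte - pred) / zte)) * 100
    print(type(model).__name__, f"RMSE={rmse:.2f} MAD={mad:.2f} MAPE={mape:.1f}%")
```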


Subject(s)
Oryza; Oryza/growth & development; Multivariate Analysis; Logistic Models; India; Crops, Agricultural/growth & development; Agriculture/methods; Weather; Meteorological Concepts
7.
Network ; : 1-38, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38717192

ABSTRACT

Generally, financial investments are necessary for portfolio management. However, portfolio prediction is complicated, and the various processing techniques involved can introduce issues into the forecasts. Moreover, the error analysis needs to be validated with efficient performance measures. To solve these problems of portfolio optimization, a new portfolio prediction framework is developed. Initially, a dataset is collected from a standard database containing the portfolios of various companies. For forecasting the companies' returns, a Multi-serial Cascaded Network (MCNet), which consists of an Autoencoder, a 1D Convolutional Neural Network (1DCNN), and a Recurrent Neural Network (RNN), is employed. The prediction output for the different companies is stored using the developed MCNet model for further use. After predicting the returns, the best company with the highest profit is selected by the Integration of Artificial Rabbit and Hummingbird Algorithm (IARHA). The major contribution of our work is to increase the accuracy of prediction and to choose the optimal portfolio. The implementation was conducted on the Python platform. The result analysis shows that the developed model achieves 0.89% and 0.56% for the RMSE and MAE measures, respectively. Throughout the analysis, the developed model shows enriched performance.
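
The exact MCNet topology is not public, so the following Keras sketch only illustrates a plausible serial cascade of autoencoder, 1D CNN, and RNN stages trained end-to-end on toy data; layer sizes, wiring, and the training regime are assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

window = 30                                    # days of history per sample
inp = layers.Input(shape=(window, 1))

# Stage 1: autoencoder-style bottleneck (trained end-to-end here for brevity;
# the paper may pretrain this stage separately).
flat = layers.Flatten()(inp)
code = layers.Dense(16, activation="relu")(flat)
recon = layers.Dense(window, activation="linear")(code)

# Stage 2: 1D CNN over the reconstructed sequence.
seq = layers.Reshape((window, 1))(recon)
conv = layers.Conv1D(8, kernel_size=3, padding="same", activation="relu")(seq)

# Stage 3: RNN summarizes the convolved sequence; head predicts next return.
rnn = layers.SimpleRNN(16)(conv)
out = layers.Dense(1)(rnn)

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

X = np.random.rand(256, window, 1).astype("float32")   # toy return windows
y = X[:, -1, 0] * 0.9 + 0.05                           # toy target
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```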

8.
Sci Rep ; 14(1): 10467, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38714770

ABSTRACT

At present, the utilization of Renewable Energy Sources (RES) keeps increasing because of their merits: greater availability, easy energy harvesting, lower maintenance expenses, and higher reliability. Here, solar power generation systems are used to supply energy to local consumers, and an accurate, efficient solar power supply is essential for meeting peak load demand. Accurate power generation from a photovoltaic (PV) system depends entirely on accurate extraction of its parameters. In this work, a Modified Rao-based Dichotomy Technique (MRAODT) is introduced to identify the actual parameters of different PV cells, namely the PWP 201 polycrystalline module and the RTC France cell. The proposed MRAODT method is compared with existing algorithms: the teaching and learning algorithm, the African vultures algorithm, and the tuna intelligence algorithm. Finally, the simulation results show that MRAODT outperforms the other algorithms in terms of parameter extraction time, accuracy of PV cell parameter identification, and convergence time.
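
MRAODT itself is not publicly available, so the sketch below substitutes SciPy's differential evolution to fit the standard single-diode model on a toy I-V curve, minimizing an RMSE fitness as in parameter-extraction studies; cell constants and bounds are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

k, q, T = 1.380649e-23, 1.602176634e-19, 306.15   # cell at 33 C
Vt = k * T / q

def diode_current(params, V, I):
    # Single-diode equation evaluated at measured (V, I) pairs.
    Iph, Io, n, Rs, Rsh = params
    return Iph - Io * (np.exp((V + I * Rs) / (n * Vt)) - 1) - (V + I * Rs) / Rsh

# Generate a toy I-V curve from assumed "true" parameters by fixed-point
# iteration of the implicit equation.
V = np.linspace(-0.2, 0.58, 26)
true = (0.7608, 3.2e-7, 1.48, 0.036, 53.7)
I = np.zeros_like(V)
for _ in range(50):
    I = diode_current(true, V, I)

def rmse(params):
    return np.sqrt(np.mean((I - diode_current(params, V, I)) ** 2))

bounds = [(0, 1), (1e-9, 1e-5), (1, 2), (0, 0.5), (1, 100)]
res = differential_evolution(rmse, bounds, seed=0, tol=1e-8)
print(res.x, res.fun)
```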

9.
Foods ; 13(8)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38672873

ABSTRACT

Sorbitol derivatives and other additives are commonly used in products such as food packaging to improve their mechanical, physical, and optical properties. To evaluate accurately and precisely the efficacy of adding sorbitol-type nucleating agents to these articles, their quantitative determination is essential. This study systematically investigated the quantification of sorbitol-type nucleating agents in food packaging made from impact copolymers of polypropylene (PP) and polyethylene (PE) using attenuated total reflectance infrared spectroscopy (ATR-FTIR) together with principal component analysis (PCA) and machine learning algorithms. The absorption spectra revealed characteristic bands corresponding to the C-O-C bond and the hydroxyl groups attached to the cyclohexane ring of the sorbitol molecular structure, providing crucial information for identifying and quantifying sorbitol derivatives. PCA showed that, with the selected FTIR spectral range, the first two components alone explained 99.5% of the variance. The resulting score plot showed a clear pattern distinguishing different concentrations of the nucleating agent, affirming that concentrations can be predicted in an impact copolymer. The study then employed machine learning algorithms (NN, SVR) to establish prediction models, evaluating their quality using metrics such as RMSE, R2, and RMSECV. Hyperparameter optimization was performed, and SVR showed superior performance, achieving near-perfect predictions (R2 = 0.9999) with an RMSE of 0.100 for both calibration and prediction. The selected network configuration features two hidden layers with 15 neurons each and uses the Adam algorithm, balancing precision and computational efficiency. The ATR-FTIR coupled SVR model presented a novel and rapid approach for accurately quantifying sorbitol-type nucleating agents in polymer production processes, for polymer research, and for the analysis of nucleating agent derivatives. The analytical performance of this method surpassed traditional methods (PCR, NN).
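
A compact sketch of the chemometric pipeline on synthetic spectra: standardization, two-component PCA, and a grid-searched SVR scored by cross-validated RMSE (RMSECV); band shape, concentrations, and grids are assumptions, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(3)
conc = rng.uniform(0.0, 2.0, 60)                       # % nucleating agent
wavenumbers = np.linspace(900, 1200, 300)
band = np.exp(-((wavenumbers - 1050) ** 2) / 200)      # C-O-C-like band
X = conc[:, None] * band + 0.01 * rng.normal(size=(60, 300))

pipe = make_pipeline(StandardScaler(), PCA(n_components=2), SVR(kernel="rbf"))
grid = GridSearchCV(pipe,
                    {"svr__C": [1, 10, 100], "svr__epsilon": [0.01, 0.1]},
                    cv=5, scoring="neg_root_mean_squared_error")
grid.fit(X, conc)
rmsecv = -cross_val_score(grid.best_estimator_, X, conc, cv=5,
                          scoring="neg_root_mean_squared_error").mean()
print(grid.best_params_, f"RMSECV={rmsecv:.3f}")
```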

10.
J Xray Sci Technol ; 32(3): 839-855, 2024.
Article in English | MEDLINE | ID: mdl-38393882

ABSTRACT

In the medical field, diagnostic tools that make use of deep neural networks have reached a level of performance never before seen. A proper diagnosis of a patient's condition is crucial in modern medicine since it determines whether or not the patient will receive the care they need. Before endoscopic sinus surgery, data from a sinus CT scan are uploaded to a computer and displayed on a high-definition monitor to give the surgeon a clear anatomical orientation. In this study, a unique machine learning method, designed by the authors, is presented for detecting and diagnosing paranasal sinus disorders. One of the primary goals of the study is to create an algorithm that can accurately evaluate the paranasal sinuses in CT scans, thereby speeding up diagnosis. The proposed technology automatically reduces the number of CT scan images that investigators must search through manually. In addition, the approach offers automatic segmentation that can be used to locate the paranasal sinus region and crop it accordingly. The suggested method thereby dramatically reduces the amount of data needed during the training phase, increasing computational efficiency while retaining a high degree of accuracy. It not only identifies sinus irregularities but also executes the necessary segmentation automatically, without manual cropping, eliminating time-consuming and error-prone human labor. When tested on actual CT scans, the method achieved an accuracy of 95.16% with a sensitivity of 99.14%.


Subject(s)
Artifacts; Machine Learning; Paranasal Sinuses; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Paranasal Sinuses/diagnostic imaging; Algorithms; Paranasal Sinus Diseases/diagnostic imaging; Image Processing, Computer-Assisted/methods
11.
Sensors (Basel) ; 23(23)2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38067807

ABSTRACT

The literature offers various methods for measuring sound localization. In this study, we aimed to compare these methods to determine their effectiveness in addressing different research questions by examining the effect sizes obtained from each measure. Data from 150 participants who identified the location of a sound source were analyzed to explore the effects of speaker angle, stimulus, HPD type, and condition (with/without HPD) on sound localization, using six analysis methods: mean absolute deviation (MAD), root-mean-square error (RMSE), very large errors (VLE), percentage of errors larger than the average error observed in a group of participants (pMean), percentage of errors larger than half the distance between two consecutive loudspeakers (pHalf), and mirror image reversal errors (MIRE). Results indicated that the MIRE measure was the most sensitive to the effects of speaker angle and HPD type, while the VLE measure was the most sensitive to the effect of stimulus type. The condition variable provided the largest effect sizes, with no difference observed between measures. The data suggest that when effect sizes are substantial, all methods are adequate. However, where the effect size is expected to be small, methods that yield larger effect sizes should be preferred, provided they align with the research question.
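
The six measures can be computed directly from target and response azimuths, as in this sketch on simulated data; the VLE threshold and the mirror axis used for MIRE are assumptions, since the paper's exact definitions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)
speakers = np.arange(-90, 91, 15)                      # loudspeaker azimuths (deg)
target = rng.choice(speakers, size=500)
response = target + rng.normal(scale=12, size=500)     # simulated responses
err = response - target

mad = np.mean(np.abs(err))                             # MAD
rmse = np.sqrt(np.mean(err ** 2))                      # RMSE
vle = np.mean(np.abs(err) > 45)                        # VLE; 45 deg threshold assumed
p_mean = np.mean(np.abs(err) > mad)                    # pMean (group-mean threshold)
p_half = np.mean(np.abs(err) > 15 / 2)                 # pHalf (half speaker spacing)
# MIRE: response closer to the target's mirror image about the midline
# (a simplification; the study's reversal axis may differ).
mire = np.mean(np.abs(response - (-target)) < np.abs(response - target))
print(dict(MAD=mad, RMSE=rmse, VLE=vle, pMean=p_mean, pHalf=p_half, MIRE=mire))
```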

12.
Sensors (Basel) ; 23(21)2023 Nov 02.
Article in English | MEDLINE | ID: mdl-37960637

ABSTRACT

In this paper, we propose a novel simultaneous Correlative Interferometer (CI) technique that estimates the Direction of Arrival (DOA) of multiple source signals incident on an antenna array. The basic idea is that the antenna-array-based receiver compares the phase of the received signal with that of each candidate direction at each time sample and jointly exploits these multiple time samples to estimate the DOAs of multiple signal sources. The proposed technique collectively utilizes multiple time-domain samples and can be regarded as a generalized version of the conventional CI algorithm for the case of multiple time-domain samples. We first thoroughly review the conventional CI algorithm to explain the procedure of a direction-finding algorithm that adopts the phase information of received signals. We also discuss several technical issues of conventional CI-based DOA estimation, which was originally proposed for the case of a single time-domain sample. Then, we propose a simultaneous CI-based DOA estimation technique with multi-sample diversity as a novel solution for the case of multiple time-domain samples. Using the DOA spectrum, we compare the proposed simultaneous CI technique with the conventional CI technique, and the conventional CI technique with the existing Multiple Signal Classification (MUSIC)-based DOA estimation technique. To the best of our knowledge, a simultaneous CI-based DOA estimation technique that effectively utilizes the characteristics of multiple signal sources over multiple time-domain samples has not been reported in the literature. Through extensive computer simulations, we show that the proposed simultaneous CI technique significantly outperforms the conventional CI technique in DOA estimation, even in harsh environments and with various antenna array structures. Notably, the proposed simultaneous CI technique also performs much better than the classical MUSIC algorithm, one of the most representative subspace-based DOA estimation techniques.
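
A toy version of the core idea, assuming a uniform linear array: candidate steering phases are correlated with the received snapshots and the correlation cost is accumulated over all time samples before picking the DOA; the geometry and signal model are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
M, d, wavelength = 8, 0.5, 1.0             # 8 elements, half-wavelength spacing
snapshots, snr_db = 64, 10
true_doa = np.deg2rad(23.0)

def steering(theta):
    m = np.arange(M)
    return np.exp(2j * np.pi * d / wavelength * m * np.sin(theta))

s = (rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)) / np.sqrt(2)
noise = (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
noise *= 10 ** (-snr_db / 20) / np.sqrt(2)
X = steering(true_doa)[:, None] * s[None, :] + noise

grid = np.deg2rad(np.arange(-90, 90.1, 0.1))
A = np.stack([steering(th) for th in grid])            # candidates x elements
# Correlate measured phases with candidate phases, summed over snapshots.
cost = np.abs(A.conj() @ X).sum(axis=1)
print("estimated DOA:", np.rad2deg(grid[np.argmax(cost)]))
```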

13.
J Biopharm Stat ; : 1-20, 2023 Oct 12.
Article in English | MEDLINE | ID: mdl-37823377

ABSTRACT

There are good reasons to perform a randomized controlled trial (RCT) even in early phases of clinical development. However, the low sample sizes in those settings lead to high variability of the treatment effect estimate. This variability can be reduced by adding external control data, if available. For the common setting in which suitable subject-level control group data are available from only one external (clinical trial or real-world) data source, we evaluate different analysis options for estimating the treatment effect via hazard ratios. The influence given to the external control data is usually guided by its level of similarity with the current RCT data. This level of similarity can be determined via comparisons of outcome and/or baseline covariate data. We provide an overview of existing methods, propose a novel option for a combined assessment of outcome and baseline data, and compare a selected set of approaches in a simulation study under varying assumptions on observable and unobservable confounder distributions, using a time-to-event model. Our simulation scenarios also reflect the differences between external clinical trial and real-world data. Data combinations via simple outcome-based borrowing or simple propensity score weighting with baseline covariate data are not recommended. Analysis options that conflate outcome and baseline covariate data perform best in our simulation study.
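
As a sketch of one of the simple options evaluated (propensity score weighting of external controls before a weighted Cox model), assuming the lifelines and scikit-learn libraries and synthetic data; note the paper's simulations caution against relying on this simple weighting alone.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n_rct, n_ext = 60, 120
df = pd.DataFrame({
    "treated": np.r_[np.ones(30), np.zeros(30), np.zeros(n_ext)],
    "external": np.r_[np.zeros(n_rct), np.ones(n_ext)],
    "age": np.r_[rng.normal(60, 8, n_rct), rng.normal(64, 8, n_ext)],
})
hazard = 0.02 * np.exp(-0.5 * df.treated + 0.02 * (df.age - 60))
df["time"] = rng.exponential(1 / hazard)
df["event"] = (df.time < 36).astype(int)       # administrative censoring
df.loc[df.time >= 36, "time"] = 36.0

# Weight external controls toward the RCT covariate distribution (odds of
# RCT membership from a propensity model on baseline covariates).
ps = LogisticRegression().fit(df[["age"]], 1 - df.external).predict_proba(
    df[["age"]])[:, 1]
df["w"] = np.where(df.external == 1, ps / (1 - ps), 1.0)

cph = CoxPHFitter()
cph.fit(df[["time", "event", "treated", "age", "w"]], duration_col="time",
        event_col="event", weights_col="w", robust=True)
print(cph.hazard_ratios_["treated"])
```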

14.
Entropy (Basel) ; 25(9)2023 Sep 20.
Article in English | MEDLINE | ID: mdl-37761659

ABSTRACT

Matrix factorization is a long-established method for analyzing complex networks of user ratings and extracting valuable insights and recommendations from them. The execution time and computational resources demanded by these algorithms pose limitations when confronted with large datasets. Community detection algorithms play a crucial role in identifying groups and communities within intricate networks. To overcome the challenge of extensive computing resources in matrix factorization techniques, we present a novel framework that utilizes the inherent community information of the rating network. Our proposed approach, named Community-Based Matrix Factorization (CBMF), has the following steps: (1) model the rating network as a complex bipartite network; (2) divide the network into communities; (3) extract the rating matrices pertaining only to those communities and apply MF on these matrices in parallel; (4) merge the predicted rating matrices belonging to the communities and evaluate the root mean square error (RMSE). In our experimentation, we use basic MF, SVD++, and FANMF for matrix factorization, and the Louvain algorithm for community division. The experimental evaluation on six datasets shows that the proposed CBMF enhances the quality of recommendations in each case. On the MovieLens 100K dataset, the RMSE was reduced from 1.26 to 0.21 using SVD++ by dividing the network into 25 communities. A similar reduction in RMSE is observed for the FilmTrust, Jester, Wikilens, Good Books, and Cell Phone datasets.
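
A condensed sketch of the four CBMF steps on a tiny random rating matrix, assuming networkx's Louvain implementation and a plain SGD matrix factorization; ratings, sizes, and hyperparameters are illustrative.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
R = np.where(rng.random((40, 60)) < 0.2, rng.integers(1, 6, (40, 60)), 0.0)

# Steps 1-2: bipartite rating graph, then Louvain communities.
G = nx.Graph()
G.add_edges_from(("u%d" % u, "i%d" % i) for u, i in zip(*np.nonzero(R)))
communities = nx.community.louvain_communities(G, seed=0)

def mf(sub, k=4, iters=200, lr=0.01, reg=0.05):
    # Plain SGD matrix factorization on the known cells of a sub-matrix.
    users, items = sub.shape
    P, Q = 0.1 * rng.random((users, k)), 0.1 * rng.random((items, k))
    rows, cols = np.nonzero(sub)
    for _ in range(iters):
        for u, i in zip(rows, cols):
            e = sub[u, i] - P[u] @ Q[i]
            P[u] += lr * (e * Q[i] - reg * P[u])
            Q[i] += lr * (e * P[u] - reg * Q[i])
    return P @ Q.T

# Steps 3-4: factorize per community, merge, compute RMSE on known cells.
pred = np.zeros_like(R)
for com in communities:
    us = sorted(int(n[1:]) for n in com if n.startswith("u"))
    its = sorted(int(n[1:]) for n in com if n.startswith("i"))
    if us and its:
        pred[np.ix_(us, its)] = mf(R[np.ix_(us, its)])
mask = R > 0
print("RMSE =", np.sqrt(np.mean((R[mask] - pred[mask]) ** 2)))
```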

15.
Int J Biostat ; 2023 May 10.
Article in English | MEDLINE | ID: mdl-37159838

ABSTRACT

In case-control studies, odds ratios (ORs) are calculated from 2 × 2 tables, and in some instances we observe small cell counts or a zero count in one of the cells. Corrections for calculating ORs in the presence of empty cells are available in the literature, including the Yates continuity correction and the Agresti and Coull correction. However, the available methods provide different corrections, and the situations in which each should be applied are not apparent. Therefore, the current research proposes an iterative algorithm for estimating an exact (optimum) correction factor for the respective sample size. This was evaluated by simulating data with varying proportions and sample sizes. The estimated correction factor was assessed using the bias, the standard error of the odds ratio, the root mean square error, and the coverage probability. We also present a linear function that identifies the exact correction factor from the sample size and proportion.
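
The effect of the correction factor is easy to see on a single 2 × 2 table with an empty cell, as in this sketch sweeping candidate factors against the common Haldane-Anscombe 0.5 correction; the counts are invented.

```python
import numpy as np

a, b, c, d = 12, 0, 5, 20          # 2x2 cell counts; b is the empty cell

def corrected_or(cc):
    # Add the correction factor cc to every cell before forming the OR.
    return ((a + cc) * (d + cc)) / ((b + cc) * (c + cc))

def log_or_se(cc):
    # Woolf standard error of log(OR) on the corrected table.
    return np.sqrt(sum(1.0 / (x + cc) for x in (a, b, c, d)))

for cc in (0.5, 0.25, 0.1):
    or_hat = corrected_or(cc)
    se = log_or_se(cc)
    lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se)
    print(f"cc={cc}: OR={or_hat:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```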

16.
Sensors (Basel) ; 23(8)2023 Apr 11.
Article in English | MEDLINE | ID: mdl-37112219

ABSTRACT

Improving the accuracy of DEMs is a critical goal in digital terrain analysis, and combining multi-source data can increase DEM accuracy. Five typical geomorphic study areas in the Loess Plateau in Shaanxi were selected for a case study, with 5 m DEM units used as the basic data input. DEM data from three open-source databases, ALOS, SRTM, and ASTER, were obtained and processed uniformly through a prior geographical registration step. Three methods, Gram-Schmidt pan sharpening (GS), weighted fusion, and feature-point-embedding fusion, were used for mutual enhancement of the three kinds of data. We applied the three fusion methods across the five sample areas and compared the characteristic values before and after fusion. The main conclusions are as follows: (1) The GS fusion method is convenient and simple, and all three data combinations improved under it. Generally speaking, the fusion of ALOS and SRTM data performed best, but the result was strongly affected by the quality of the original data. (2) By embedding feature points into the three publicly available types of DEM data, the errors and extreme error values of the fused data were significantly reduced. Overall, fusion involving ALOS performed best because ALOS had the best raw data quality. The original characteristic values of ASTER were inferior throughout, and its post-fusion improvement in error and extreme error values was evident. (3) By dividing each sample area into sub-areas and fusing them separately according to per-area weights, the accuracy of the resulting data was significantly improved. Comparing the accuracy improvements across regions showed that the fusion of ALOS and SRTM data depends on gentle terrain; the higher the accuracy of these two inputs, the better the fusion. Merging ALOS and ASTER data led to the greatest increase in accuracy, especially in areas with steep slopes, while merging SRTM and ASTER data produced relatively stable improvements with little regional difference.
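
A minimal sketch of the weighted-fusion idea on synthetic tiles: each source is weighted by its inverse error variance estimated from control points, and RMSE is compared before and after fusion; the arrays stand in for co-registered ALOS/SRTM data.

```python
import numpy as np

rng = np.random.default_rng(8)
truth = np.cumsum(rng.normal(size=(50, 50)), axis=0)     # toy terrain
dem_a = truth + rng.normal(scale=1.0, size=truth.shape)  # "ALOS"-like tile
dem_b = truth + rng.normal(scale=2.0, size=truth.shape)  # "SRTM"-like tile

def rmse(x):
    return np.sqrt(np.mean((x - truth) ** 2))

# Inverse-variance weights estimated from control points (here: full truth).
wa, wb = 1 / rmse(dem_a) ** 2, 1 / rmse(dem_b) ** 2
fused = (wa * dem_a + wb * dem_b) / (wa + wb)
print(rmse(dem_a), rmse(dem_b), rmse(fused))   # fused RMSE should be lowest
```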

17.
Lancet Reg Health West Pac ; 31: 100637, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36879780

ABSTRACT

Background: We aimed to estimate the future burden of coronary heart disease (CHD) and stroke mortality by sex and for all 47 prefectures of Japan until 2040, accounting for the effects of age, period, and cohort, and integrating the prefecture-level estimates to the national level to account for regional differences among prefectures. Methods: We produced future CHD and stroke mortality projections by developing Bayesian age-period-cohort (BAPC) models of the population and the numbers of CHD and stroke deaths by age, sex, and prefecture observed from 1995 to 2019, and then applying these models to official future population estimates until 2040. Participants were all men and women aged over 30 years residing in Japan. Findings: In the BAPC models, the predicted number of national-level cardiovascular deaths from 2020 to 2040 would decrease (from 39,600 [95% credible interval: 32,200-47,900] to 36,200 [21,500-58,900] CHD deaths in men, and from 27,400 [22,000-34,000] to 23,600 [12,700-43,800] in women; and from 50,400 [41,900-60,200] to 40,800 [25,200-67,800] stroke deaths in men, and from 52,200 [43,100-62,800] to 47,400 [26,800-87,200] in women). Interpretation: After adjusting for these factors, future CHD and stroke deaths will decline until 2040 at the national level and in most prefectures. Funding: This research was supported by the Intramural Research Fund of Cardiovascular Diseases of the National Cerebral and Cardiovascular Center (21-1-6, 21-6-8), JSPS KAKENHI Grant Number JP22K17821, and the Ministry of Health, Labour and Welfare Comprehensive Research on Life-Style Related Diseases (Cardiovascular Diseases and Diabetes Mellitus Program), Grant Number 22FA1015.

18.
J Mech Behav Biomed Mater ; 140: 105688, 2023 04.
Article in English | MEDLINE | ID: mdl-36753847

ABSTRACT

OBJECTIVES: To measure and compare the accuracy of 3D-printed materials used for removable partial denture (RPD) production, to improve workflow and eliminate manufacturing errors. METHODS: A partially edentulous maxilla (Kennedy Class III, modification 1) was prepared and designed with proximal plates, rest seats, and clasps on one first premolar, one canine, and two second molars. A total of 540 RPD frameworks were 3D printed with three types of resin: DentaCAST (Asiga, Australia), SuperCAST (Asiga, Australia), and NextDent (3D Systems, Netherlands). To evaluate the trueness of the printing materials, frameworks were printed with three layer thicknesses (50 µm, 75 µm, and 100 µm), two build angles (0° and 45°), and three build plate locations (side, middle, and corner). After production, all specimens were scanned and superimposed onto the digitally designed control sample. Using the initial alignment and best-fit alignment method, the root mean square error (RMSE) was calculated. To capture region-specific discrepancy, the XYZ internal discrepancy was measured at 10 points within each RPD and the Euclidean error was calculated. Data were statistically analysed using Shapiro-Wilk and Kruskal-Wallis tests, one-way ANOVA, and t-tests (SPSS Version 29) and MATLAB (R2022b). RESULTS: Optimal results were found using a 45° build angle, the middle of the build plate, and layer thicknesses of 100 µm (115 ± 19 µm, DentaCAST), 75 µm (143 ± 14 µm, NextDent), and 50 µm (98 ± 35 µm, SuperCAST), all of which were clinically acceptable. Results were statistically significant when comparing layer thickness within each testing group (p < 0.001). Layer thickness was a primary parameter determining print accuracy for all materials (p < 0.001). Higher discrepancies and more failures were observed in 0° prints. No statistically significant difference in material usage was found between build angles or layer thicknesses (p > 0.05). CONCLUSIONS: All three 3D printing materials exhibited clinically acceptable RMSE results with a build angle of 45° and a printing layer thickness of 50 µm for SuperCAST, 75 µm for NextDent, and 100 µm for DentaCAST. The highest discrepancies were mostly found in posterior clasps, while the lowest discrepancies were found in palatal straps. Despite unoptimized spacing of prints, frameworks printed in the middle of the build plate resulted in the fewest printing failures.
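
A sketch of the superimposition step, assuming a rigid best-fit (Kabsch) alignment of scanned landmarks to the reference design followed by RMSE and per-point Euclidean errors; the landmark sets and perturbations are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
ref = rng.normal(size=(10, 3)) * 10          # 10 reference landmarks (mm)
angle = np.deg2rad(2.0)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle), np.cos(angle), 0],
               [0, 0, 1]])
scan = ref @ Rz.T + np.array([0.3, -0.2, 0.1]) \
       + rng.normal(scale=0.05, size=ref.shape)

# Kabsch: optimal rotation/translation minimizing RMSE between point sets.
mu_r, mu_s = ref.mean(0), scan.mean(0)
U, _, Vt = np.linalg.svd((scan - mu_s).T @ (ref - mu_r))
Rot = (U @ Vt).T
if np.linalg.det(Rot) < 0:                    # guard against reflection
    Vt[-1] *= -1
    Rot = (U @ Vt).T
aligned = (scan - mu_s) @ Rot.T + mu_r

resid = np.linalg.norm(aligned - ref, axis=1)           # Euclidean errors
print("RMSE (mm):", np.sqrt(np.mean(resid ** 2)), "max:", resid.max())
```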


Subject(s)
Computer-Aided Design; Denture, Partial; Printing, Three-Dimensional; Analysis of Variance; Bone Plates
19.
Biol Methods Protoc ; 8(1): bpac035, 2023.
Article in English | MEDLINE | ID: mdl-36741926

ABSTRACT

With the rapid spread of COVID-19, there is an urgent need for a framework to accurately predict COVID-19 transmission. Recent epidemiological studies have found that a prominent feature of COVID-19 is its ability to be transmitted before symptoms occur, which is generally not the case for seasonal influenza or severe acute respiratory syndrome. Several COVID-19 predictive epidemiological models have been proposed; however, they share a common drawback: they are unable to capture the uniquely asymptomatic nature of COVID-19 transmission. Here, we propose vector autoregression (VAR) as an epidemiological county-level prediction model that captures this unique aspect of COVID-19 transmission by introducing newly infected cases in other counties as lagged explanatory variables. Using the number of new COVID-19 cases in seven New York State counties, we predicted new COVID-19 cases in those counties over the next 4 weeks. We then compared our prediction results with those of 11 other state-of-the-art prediction models proposed by leading research institutes and academic groups. The results showed that VAR prediction is superior to the other epidemiological prediction models in terms of root mean square error of prediction. Thus, we strongly recommend the simple VAR model as a framework to accurately predict COVID-19 transmission.
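
A minimal sketch with statsmodels: a VAR in which each county's lagged cases enter every other county's equation, fitted and used for a 4-week-ahead forecast; the three-county series is simulated, not the New York data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(6)
T, counties = 120, ["A", "B", "C"]
data = np.zeros((T, 3))
for t in range(1, T):
    # Each county depends on its own and a neighbor's previous counts.
    data[t] = (0.5 * data[t - 1] + 0.2 * np.roll(data[t - 1], 1)
               + rng.normal(scale=1.0, size=3) + 1.0)
df = pd.DataFrame(data, columns=counties)

fit = VAR(df).fit(2)                                      # VAR(2)
forecast = fit.forecast(df.values[-fit.k_ar:], steps=4)   # next 4 weeks
print(pd.DataFrame(forecast, columns=counties))
```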

20.
Int J Pharm X ; 5: 100150, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36593987

ABSTRACT

Inkjet printing has the potential to advance the treatment of eye diseases by printing drugs on demand onto contact lenses for localised delivery and personalised dosing, while near-infrared (NIR) spectroscopy can be used as a quality control method for quantifying the drug but had yet to be demonstrated with contact lenses. In this study, a glaucoma therapy drug, timolol maleate, was successfully printed onto contact lenses using a modified commercial inkjet printer. The drug-loaded ink prepared for the printer was designed to match the properties of commercial ink while maximising drug loading and avoiding ocular inflammation. The setup demonstrated personalised drug dosing by printing multiple passes. Light transmittance was found to be unaffected by drug loading on the contact lens. A novel dissolution model was built, and in vitro dissolution studies showed drug release over at least 3 h, significantly longer than eye drops. NIR was used as an external validation method to accurately quantify the drug dose. Overall, the combination of inkjet printing and NIR represents a novel method for point-of-care personalisation and quantification of drug-loaded contact lenses.
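
NIR quantification of this kind is typically a multivariate calibration; the sketch below uses PLS regression on synthetic spectra as a stand-in (the study's actual NIR model is not described here), scored by cross-validated RMSE.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(10)
dose = rng.uniform(0, 200, 40)                         # ug timolol per lens (toy)
wl = np.linspace(1100, 2500, 350)                      # nm, typical NIR range
peak = np.exp(-((wl - 1700) ** 2) / 5000)              # assumed drug-related band
X = dose[:, None] * peak + 0.5 * rng.normal(size=(40, 350))

pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, X, dose, cv=5).ravel()
rmse = np.sqrt(np.mean((pred - dose) ** 2))
print(f"cross-validated RMSE: {rmse:.1f} ug")
```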
