Results 1 - 20 of 50
1.
Risk Anal ; 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39166706

ABSTRACT

As urbanization accelerates worldwide, urban flooding is becoming increasingly destructive, making it important to improve emergency scheduling capabilities. Compared with other scheduling problems, urban flood emergency rescue scheduling is more complicated: because a disaster degrades the passability of the road network, a single vehicle type cannot complete all rescue tasks, and a reasonable combination of multiple vehicle types working cooperatively can improve rescue efficiency. This study focuses on the urban flood emergency rescue scheduling problem under actual road network inundation conditions. First, the progress and shortcomings of related research are analyzed. Then, a four-level emergency transportation network based on a collaborative water-ground multimodal transport and transshipment mode is established, in which the transshipment points have random locations and quantities determined by the actual inundation situation. Subsequently, an interactive model based on hierarchical optimization is constructed, with travel length, travel time, and waiting time as the hierarchical optimization objectives. Next, an improved A* algorithm based on the quantity of specific extension nodes is proposed, and a scheduling scheme decision-making algorithm is built on the improved A* and greedy algorithms. Finally, the proposed decision-making algorithm is applied to a practical example for solution and comparative analysis; the results show that the improved A* algorithm is faster and more accurate, verify the effectiveness of the scheduling model and decision-making algorithm, and yield a scheduling scheme with the shortest travel time for the proposed emergency scheduling problem.
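The abstract describes the improved A* search and the greedy scheduling layer only at a high level. The following minimal sketch, with an entirely hypothetical road network, `flooded` edge flags, and a straight-line heuristic, illustrates the general idea of routing around inundated edges with A* and then greedily assigning each rescue task to the vehicle with the shortest feasible route; it is not the authors' algorithm.

```python
import heapq
import math

# Hypothetical road network: node -> {neighbor: (length_km, flooded)}.
# Coordinates are used only for the admissible straight-line heuristic.
coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}
roads = {
    "A": {"B": (1.0, False), "C": (1.6, False)},
    "B": {"A": (1.0, False), "D": (1.5, True)},   # flooded: impassable for ground vehicles
    "C": {"A": (1.6, False), "D": (1.1, False)},
    "D": {"B": (1.5, True), "C": (1.1, False)},
}

def a_star(start, goal, can_pass_flooded=False):
    """Plain A* over the road graph; flooded edges are skipped for ground vehicles."""
    h = lambda n: math.dist(coords[n], coords[goal])
    best = {start: 0.0}
    frontier = [(h(start), 0.0, start)]
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for nxt, (length, flooded) in roads[node].items():
            if flooded and not can_pass_flooded:
                continue
            new_g = g + length
            if new_g < best.get(nxt, float("inf")):
                best[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt))
    return float("inf")

# Greedy assignment: each rescue task goes to the vehicle with the shortest feasible route.
tasks = ["D"]
vehicles = [("boat", "A", True), ("truck", "A", False)]   # (type, depot, can pass flooded edges)
for task in tasks:
    chosen = min(vehicles, key=lambda v: a_star(v[1], task, v[2]))
    print(task, "->", chosen[0])
```

In the paper, the four-level network and the transshipment points would take the place of this toy graph.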

2.
Math Med Biol ; 41(3): 157-168, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-38978123

ABSTRACT

Experimental and theoretical properties of amino acids as the building blocks of peptides and proteins have been researched extensively. Each such characterization assigns a number to every amino acid, and one such assignment is called an amino-acid scale. Their use in bioinformatics to explain and predict the behaviour of peptides and proteins is of essential value. The number of such scales is very large: there are more than a hundred scales related to hydrophobicity alone. A large number of scales can be a computational burden for algorithms that define peptide descriptors by combining several of these scales, so it is of interest to construct a smaller but still representative set of scales. Here, we present software that does this. We test it on the set of scales in the database constructed by Kawashima and collaborators and show that the number of scales can be reduced significantly without losing much information. The algorithm is implemented in C#. As a result, we provide a smaller database that may be a very useful tool for the analysis and construction of new peptides. Another interesting application of this database would be to compare artificial-intelligence-based construction of peptides using the complete Kawashima database as input against using this reduced one; obtaining similar results in both cases would lend much credibility to the constructs of such AI algorithms.
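The abstract does not state the selection criterion used by the software; a common way to obtain a smaller but representative set of scales is to greedily discard scales that are highly correlated with scales already kept. The sketch below illustrates that idea with made-up scale values and a hypothetical correlation threshold `r_max`; it is not the published C# implementation.

```python
import numpy as np

def reduce_scales(scales: np.ndarray, names: list[str], r_max: float = 0.9) -> list[str]:
    """Greedily keep a scale only if its absolute Pearson correlation with every
    already-kept scale stays below r_max (a hypothetical redundancy threshold)."""
    corr = np.abs(np.corrcoef(scales))          # scales: one row per scale, 20 columns (amino acids)
    kept: list[int] = []
    for i in range(len(names)):
        if all(corr[i, j] < r_max for j in kept):
            kept.append(i)
    return [names[i] for i in kept]

# Toy example with three made-up scales over the 20 amino acids.
rng = np.random.default_rng(0)
base = rng.normal(size=20)
scales = np.vstack([base, base + 0.01 * rng.normal(size=20), rng.normal(size=20)])
print(reduce_scales(scales, ["hydro_A", "hydro_B", "bulk_C"]))  # hydro_B is dropped as redundant
```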


Subject(s)
Algorithms, Amino Acids, Computational Biology, Software, Peptides, Protein Databases, Proteins/chemistry, Hydrophobic and Hydrophilic Interactions
3.
Sensors (Basel) ; 24(9)2024 May 01.
Article in English | MEDLINE | ID: mdl-38733003

ABSTRACT

In the context of the rapid development of the Internet of Vehicles, virtual reality, autonomous driving, and the industrial Internet, the number of terminal devices in the network is growing explosively. As a result, more and more information is generated at the edge of the network, which dramatically increases the data throughput of the mobile communication network. As a key technology of the fifth-generation mobile communication network, mobile edge caching, which caches popular data on edge servers deployed at the edge of the network, avoids the data transmission delay of the backhaul link and the occurrence of network congestion. With the growing scale of the network, distributing hot data from cloud servers to edge servers generates huge energy consumption. To support the green and sustainable development of the communication industry and to reduce the energy consumed when distributing data to be cached in edge servers, we make the first attempt to propose and solve the problem of edge caching data distribution with minimum energy consumption (ECDDMEC) in this paper. First, we model and formulate the problem as a constrained optimization problem and prove its NP-hardness. We then design a greedy algorithm with computational complexity of O(n^2) to solve the problem approximately. Experimental results show that, compared with the strategy in which each edge server requests data directly from the cloud server, the strategy obtained by the algorithm significantly reduces the energy consumption of data distribution.
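The abstract names an O(n^2) greedy algorithm but not its selection rule. The sketch below shows one plausible greedy distribution scheme under assumed per-link energy costs: servers are processed in order of their backhaul (cloud) cost, and each fetches the cached item from the cheapest source that already holds it. Both the cost values and the rule itself are illustrative assumptions, not the paper's algorithm.

```python
def distribute(cloud_cost: list[float], edge_cost: list[list[float]]) -> tuple[list[int], float]:
    """Hypothetical O(n^2) greedy: servers are served in order of their cloud cost;
    each one fetches the cached item from the cheapest source already holding it
    (the cloud, encoded as -1, or a previously served edge server)."""
    n = len(cloud_cost)
    order = sorted(range(n), key=lambda i: cloud_cost[i])
    holders: list[int] = []          # edge servers that already cached the item
    plan, total = [-1] * n, 0.0
    for i in order:
        src = min(holders, key=lambda j: edge_cost[j][i], default=None)
        if src is not None and edge_cost[src][i] < cloud_cost[i]:
            plan[i], cost = src, edge_cost[src][i]
        else:
            plan[i], cost = -1, cloud_cost[i]
        total += cost
        holders.append(i)
    return plan, total

# Toy instance: 3 edge servers; server-to-server transfers are cheaper than the backhaul.
cloud = [5.0, 5.0, 5.0]
edge = [[0, 1.0, 2.0], [1.0, 0, 1.5], [2.0, 1.5, 0]]
print(distribute(cloud, edge))   # only one server pays the backhaul cost
```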

4.
Sensors (Basel) ; 24(6)2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38544141

ABSTRACT

Last-mile logistics in cities has become an indispensable part of the urban logistics system. This study explores the effective selection of last-mile logistics nodes to enhance the efficiency of logistics distribution, strengthen the corporate distribution image, further reduce corporate operating costs, and alleviate urban traffic congestion. This paper proposes a clustering-based approach to identify urban logistics nodes from the perspective of geographic information fusion. The method comprehensively considers several key indicators, including the coverage, balance, and urban traffic conditions of logistics distribution. Additionally, we employ a greedy algorithm to identify secondary nodes around primary nodes, thus constructing an effective nodal network. To verify the practicality of this model, we conducted an empirical simulation study using the logistics demand and traffic conditions of the Xianlin District of Nanjing. This research not only identifies the locations of primary and secondary logistics nodes but also provides a new perspective for constructing urban last-mile logistics systems, enriching academic research on the construction of logistics nodes. The results are of significant theoretical and practical importance for optimizing urban logistics networks, enhancing logistics efficiency, and improving urban traffic conditions.
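For the greedy step that places secondary nodes around the primary ones, one natural reading is a coverage heuristic: repeatedly open the candidate site that serves the most still-unserved demand points. The sketch below uses random demand coordinates, random candidate sites, and an assumed service radius; none of these values come from the paper, and the clustering of primary nodes is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
demand = rng.uniform(0, 10, size=(200, 2))        # hypothetical demand points (km grid)
candidates = rng.uniform(0, 10, size=(15, 2))     # hypothetical candidate secondary-node sites
radius = 2.0                                      # assumed service radius of a secondary node

# Greedy set cover: repeatedly open the candidate site that covers the most
# still-uncovered demand points within the service radius.
uncovered = np.ones(len(demand), dtype=bool)
opened = []
while uncovered.any() and len(opened) < len(candidates):
    cover = [(np.sum(uncovered & (np.linalg.norm(demand - c, axis=1) <= radius)), i)
             for i, c in enumerate(candidates) if i not in opened]
    gain, best = max(cover)
    if gain == 0:
        break
    opened.append(best)
    uncovered &= np.linalg.norm(demand - candidates[best], axis=1) > radius

print(f"opened {len(opened)} secondary nodes, {uncovered.sum()} demand points left uncovered")
```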

5.
Stat Med ; 43(9): 1726-1742, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38381059

ABSTRACT

Current status data are a type of failure time data that arise when the failure time of a study subject cannot be determined precisely but is known only to occur before or after a random monitoring time. Variable selection methods for failure time data have been discussed extensively in the literature. However, statistical inference for a model selected by a variable selection method ignores the uncertainty caused by model selection. To enhance prediction accuracy for risk quantities such as survival probability, we propose two optimal model averaging methods under semiparametric additive hazards models. Specifically, based on martingale residual processes, a delete-one cross-validation (CV) process is defined, and two new CV functional criteria are derived for choosing the model weights. Furthermore, we present a greedy algorithm for implementing the techniques, and the asymptotic optimality of the proposed model averaging approaches is established, along with the convergence of the greedy averaging algorithms. A series of simulation experiments demonstrates the effectiveness and superiority of the proposed methods. Finally, a real-data example is provided as an illustration.
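The paper's greedy averaging algorithm operates on CV criteria built from martingale residual processes, which are not reproduced here. The sketch below shows the generic greedy model-averaging template it resembles: at each iteration the weight vector moves toward the single candidate model that most reduces a delete-one CV loss (a plain squared error in this illustration), so the weights stay on the simplex. All data and the loss function are invented.

```python
import numpy as np

def greedy_weights(cv_preds: np.ndarray, target: np.ndarray, steps: int = 50) -> np.ndarray:
    """Greedy model averaging on the weight simplex (a generic sketch, not the
    paper's martingale-residual criterion): at iteration t, move the weight vector
    toward the single candidate model that most reduces the CV loss."""
    m = cv_preds.shape[0]                 # cv_preds[k]: delete-one CV predictions of model k
    w = np.zeros(m)
    avg = np.zeros_like(target)
    for t in range(1, steps + 1):
        losses = [np.mean((avg + (cv_preds[k] - avg) / t - target) ** 2) for k in range(m)]
        k = int(np.argmin(losses))
        avg += (cv_preds[k] - avg) / t    # running average == simplex weights summing to 1
        w *= (t - 1) / t
        w[k] += 1 / t
    return w

# Toy check: the true signal is mostly model 0 plus a bit of model 2.
rng = np.random.default_rng(2)
preds = rng.normal(size=(3, 100))
y = 0.8 * preds[0] + 0.2 * preds[2] + 0.05 * rng.normal(size=100)
print(np.round(greedy_weights(preds, y), 2))
```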


Subject(s)
Algorithms, Statistical Models, Humans, Proportional Hazards Models, Computer Simulation, Probability
6.
Heliyon ; 9(9): e20133, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37809602

ABSTRACT

Gene selection (GS) is a strategy aimed at reducing redundancy, limited expressiveness, and low informativeness in gene expression datasets obtained with DNA microarray technology. These datasets contain a plethora of diverse, high-dimensional samples and genes, with a significant discrepancy between the numbers of samples and genes. The complexities of GS are especially noticeable in microarray expression data analysis owing to the inherent data imbalance. The main goal of this study is to offer a simplified and computationally effective approach to the conundrum of attribute selection in microarray gene expression data. To achieve this, we use the Black Widow Optimization algorithm (BWO) in the context of GS with two distinct methodologies: the unaltered BWO variant and a hybridized BWO variant combined with the Iterated Greedy algorithm (BWO-IG). By improving the local search capabilities of BWO, this hybridization aims to promote more efficient gene selection. A series of tests was carried out on nine benchmark datasets obtained from the gene expression data repository for empirical validation. The results of these tests conclusively show that the BWO-IG technique outperforms the traditional BWO algorithm. Notably, the hybridized BWO-IG technique excels in the efficiency of local search, making it easier to identify relevant genes and producing findings with higher reliability in terms of accuracy and the degree of gene pruning. Additionally, a comparative analysis is performed against five modern wrapper feature selection (FS) methodologies, namely BIMFOHHO, BMFO, BHHO, BCS, and BBA, to put the effectiveness of the proposed BWO-IG method into context. This comparison highlights the clear superiority of BWO-IG in reducing the number of selected genes while also obtaining remarkably high classification accuracy. The key findings were an average classification accuracy of 94.426, average fitness values of 0.061, and an average number of selected genes of 2933.767.
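Only the Iterated Greedy (IG) component lends itself to a compact illustration; the Black Widow Optimization part and the classifier-based fitness are omitted. In the sketch below the destruction size `d`, the toy fitness function, and the "informative" gene set are all invented for demonstration, so this is only a loose stand-in for the published hybrid.

```python
import random

def iterated_greedy(n_genes: int, fitness, iters: int = 100, d: int = 3, seed: int = 0):
    """Sketch of the iterated greedy (IG) component only, not the full BWO-IG hybrid:
    repeatedly drop d selected genes at random (destruction), then greedily re-add
    candidate genes whenever they improve the caller-supplied fitness (construction)."""
    rng = random.Random(seed)
    best = [rng.random() < 0.5 for _ in range(n_genes)]
    best_fit = fitness(best)
    for _ in range(iters):
        cand = best[:]
        selected = [i for i, b in enumerate(cand) if b]
        for i in rng.sample(selected, k=min(d, len(selected))):   # destruction
            cand[i] = False
        for i in rng.sample(range(n_genes), k=n_genes):           # greedy construction
            if not cand[i]:
                trial = cand[:]
                trial[i] = True
                if fitness(trial) > fitness(cand):
                    cand = trial
        if fitness(cand) > best_fit:
            best, best_fit = cand, fitness(cand)
    return best, best_fit

# Toy fitness: reward a hypothetical "informative" gene set, penalize mask size.
informative = {2, 7, 11}
fit = lambda m: sum(m[i] for i in informative) - 0.05 * sum(m)
print(iterated_greedy(16, fit))
```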

7.
Front Physiol ; 14: 1264690, 2023.
Article in English | MEDLINE | ID: mdl-37745249

ABSTRACT

Introduction: The inverse problem of electrocardiography noninvasively localizes the origin of undesired cardiac activity, such as a premature ventricular contraction (PVC), from potential recordings from multiple torso electrodes. However, the optimal number and placement of electrodes for an accurate solution of the inverse problem remain undetermined. This study presents a two-step inverse solution for a single dipole cardiac source, which investigates the significance of the torso electrodes on a patient-specific level. Furthermore, the impact of the significant electrodes on the accuracy of the inverse solution is studied. Methods: Body surface potential recordings from 128 electrodes of 13 patients with PVCs and their corresponding homogeneous and inhomogeneous torso models were used. The inverse problem using a single dipole was solved in two steps: First, using information from all electrodes, and second, using a subset of electrodes sorted in descending order according to their significance estimated by a greedy algorithm. The significance of electrodes was computed for three criteria derived from the singular values of the transfer matrix that correspond to the inversely estimated origin of the PVC computed in the first step. The localization error (LE) was computed as the Euclidean distance between the ground truth and the inversely estimated origin of the PVC. The LE obtained using the 32 and 64 most significant electrodes was compared to the LE obtained when all 128 electrodes were used for the inverse solution. Results: The average LE calculated for both torso models and using all 128 electrodes was 28.8 ± 11.9 mm. For the three tested criteria, the average LEs were 32.6 ± 19.9 mm, 29.6 ± 14.7 mm, and 28.8 ± 14.5 mm when 32 electrodes were used. When 64 electrodes were used, the average LEs were 30.1 ± 16.8 mm, 29.4 ± 12.0 mm, and 29.5 ± 12.6 mm. Conclusion: The study found inter-patient variability in the significance of torso electrodes and demonstrated that an accurate localization by the inverse solution with a single dipole could be achieved using a carefully selected reduced number of electrodes.
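The three significance criteria derived from the singular values of the transfer matrix are not spelled out in the abstract. The sketch below shows one plausible greedy forward selection of electrode rows that maximizes the smallest singular value of the reduced transfer matrix; the matrix itself is random toy data, not a torso model, and the criterion is only an assumption.

```python
import numpy as np

def rank_electrodes(A: np.ndarray, n_keep: int) -> list[int]:
    """Greedy forward selection of electrode rows of a transfer matrix A (electrodes x
    dipole components): at each step add the electrode whose row maximizes the smallest
    singular value of the reduced matrix. This is one plausible criterion; the paper
    evaluates three singular-value-based criteria that are not detailed here."""
    remaining = list(range(A.shape[0]))
    chosen: list[int] = []
    for _ in range(n_keep):
        def score(e):
            sub = A[chosen + [e], :]
            return np.linalg.svd(sub, compute_uv=False).min()
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy transfer matrix: 12 hypothetical electrodes, 3 dipole components.
rng = np.random.default_rng(3)
A = rng.normal(size=(12, 3))
print(rank_electrodes(A, n_keep=6))
```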

8.
J Comb Optim ; 45(5): 117, 2023.
Article in English | MEDLINE | ID: mdl-37304048

ABSTRACT

Thanks to the mass adoption of the internet and mobile devices, users of social media can seamlessly and spontaneously connect with their friends, followers, and followees. Consequently, social media networks have gradually become a major venue for broadcasting and relaying information and exert great influence on many aspects of people's daily lives. Locating influential users in social media has therefore become crucially important for the success of many viral marketing, cyber security, politics, and safety-related applications. In this study, we address the problem by solving the tiered influence and activation-thresholds target set selection problem, which is to find the seed nodes that can influence the most users within a limited time frame. Both the minimum influential seeds and the maximum influence within budget problems are considered. In addition, this study proposes several models exploiting different requirements on seed node selection, such as maximum activation, early activation, and dynamic thresholds. These time-indexed integer programming models suffer from computational difficulties due to the large number of binary variables needed to model influence actions at each time epoch. To address this challenge, this paper designs and leverages several efficient algorithms, namely graph partition, node selection, a greedy algorithm, a recursive threshold back algorithm, and a two-stage approach in time, especially for large-scale networks. Computational results show that it is beneficial to apply either the breadth-first-search or the depth-first-search greedy algorithm for large instances. In addition, algorithms based on node selection methods perform better in long-tailed networks.
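As a point of reference for the greedy component, the sketch below implements seed selection under a simple deterministic threshold diffusion model: each candidate seed is scored by the number of nodes it eventually activates, and the best one is added until the budget is exhausted. The toy graph, thresholds, and budget are assumptions; the paper's time-indexed integer programs and tiered thresholds are not modeled.

```python
def spread(in_neighbors: dict[str, list[str]], thresholds: dict[str, int], seeds: set[str]) -> int:
    """Deterministic threshold diffusion: a node activates once at least
    thresholds[node] of its in-neighbors are active."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, sources in in_neighbors.items():
            if node not in active and sum(s in active for s in sources) >= thresholds[node]:
                active.add(node)
                changed = True
    return len(active)

def greedy_seeds(in_neighbors, thresholds, budget: int) -> set[str]:
    """Greedy target set selection sketch: repeatedly add the seed node that yields
    the largest activated set (not the paper's integer-programming formulation)."""
    seeds: set[str] = set()
    for _ in range(budget):
        best = max(in_neighbors, key=lambda v: spread(in_neighbors, thresholds, seeds | {v}))
        seeds.add(best)
    return seeds

# Toy instance: in_neighbors[x] lists the users whose activation can influence x.
in_neighbors = {"a": [], "b": ["a"], "c": ["a", "b"], "d": ["c"], "e": ["c", "d"]}
thresholds = {"a": 1, "b": 1, "c": 1, "d": 1, "e": 2}
print(greedy_seeds(in_neighbors, thresholds, budget=1))   # {'a'} activates the whole network
```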

9.
SLAS Technol ; 28(4): 264-277, 2023 08.
Article in English | MEDLINE | ID: mdl-36997066

ABSTRACT

During laboratory automation of life science experiments, coordinating specialized instruments and human experimenters across various experimental procedures is important to minimize execution time. In particular, the scheduling of life science experiments requires the consideration of time constraints by mutual boundaries (TCMB) and can be formulated as the "scheduling for laboratory automation in biology" (S-LAB) problem. However, existing scheduling methods for S-LAB problems have difficulty obtaining a feasible solution for large scheduling problems in a time short enough for real-time use. In this study, we propose SAGAS (Simulated annealing and greedy algorithm scheduler), a fast schedule-finding method for S-LAB problems. SAGAS combines simulated annealing and a greedy algorithm to find a scheduling solution with the shortest possible execution time. We performed scheduling on real experimental protocols and show that SAGAS can find feasible or optimal solutions in practicable computation time for various S-LAB problems. Furthermore, the reduced computation time of SAGAS enables us to systematically search for laboratory automation with minimum execution time by simulating scheduling for various laboratory configurations. This study provides a convenient scheduling method for life science automation laboratories and presents a new possibility for designing laboratory configurations.
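SAGAS itself is not reproduced here, but its overall shape, simulated annealing over schedules with a greedy evaluation step, can be sketched. Below, a greedy list scheduler assigns tasks to the earliest-free machine and simulated annealing permutes the task order; the TCMB constraints central to S-LAB problems are deliberately omitted, and all durations, the cooling schedule, and the iteration count are made up.

```python
import math
import random

def makespan(order: list[int], durations: list[float], n_machines: int) -> float:
    """Greedy list scheduling: each task in the given order goes to the machine
    that becomes free earliest (a generic dispatch rule, not the S-LAB model)."""
    free = [0.0] * n_machines
    for task in order:
        m = min(range(n_machines), key=lambda i: free[i])
        free[m] += durations[task]
    return max(free)

def sagas_like(durations: list[float], n_machines: int, iters: int = 2000, seed: int = 0):
    """Simulated annealing over task orders, with the greedy dispatcher as the
    evaluation step (a sketch in the spirit of SAGAS, not the published algorithm)."""
    rng = random.Random(seed)
    order = list(range(len(durations)))
    best = cur = makespan(order, durations, n_machines)
    best_order = order[:]
    for t in range(1, iters + 1):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        cand = makespan(order, durations, n_machines)
        temp = 1.0 / t
        if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
            cur = cand
            if cand < best:
                best, best_order = cand, order[:]
        else:
            order[i], order[j] = order[j], order[i]      # undo rejected swap
    return best, best_order

print(sagas_like([3, 1, 4, 1, 5, 9, 2, 6], n_machines=3)[0])
```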


Subject(s)
Algorithms, Laboratory Automation, Humans, Laboratories
10.
Sensors (Basel) ; 23(3)2023 Jan 29.
Article in English | MEDLINE | ID: mdl-36772527

ABSTRACT

In the Information Age, the widespread usage of blackbox algorithms makes it difficult to understand how data is used. The practice of sensor fusion to achieve results is widespread, as there are many tools to further improve the robustness and performance of a model. In this study, we demonstrate the utilization of a Long Short-Term Memory (LSTM-CCA) model for the fusion of Passive RF (P-RF) and Electro-Optical (EO) data in order to gain insights into how P-RF data are utilized. The P-RF data are constructed from the in-phase and quadrature component (I/Q) data processed via histograms, and are combined with enhanced EO data via dense optical flow (DOF). The preprocessed data are then used as training data with an LSTM-CCA model in order to achieve object detection and tracking. In order to determine the impact of the different data inputs, a greedy algorithm (explainX.ai) is implemented to determine the weight and impact of the canonical variates provided to the fusion model on a scenario-by-scenario basis. This research introduces an explainable LSTM-CCA framework for P-RF and EO sensor fusion, providing novel insights into the sensor fusion process that can assist in the detection and differentiation of targets and help decision-makers to determine the weights for each input.

11.
Infect Dis Model ; 8(1): 192-202, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36688089

ABSTRACT

Background: The current outbreak of novel coronavirus disease 2019 has caused a serious disease burden worldwide. Vaccines are an important factor in containing the epidemic. Although vaccination coverage is relatively high worldwide, the decay of vaccine efficacy and the emergence of new variants pose the challenge of maintaining an immune barrier sufficient to protect the population. Method: Case-contact tracing data from Hunan, China, are used to estimate the contact patterns of cases in settings including schools and workplaces, rather than in the ordinary susceptible population. Based on the estimated vaccine coverage and efficacy, a multi-group vaccinated-exposed-presymptomatic-symptomatic-asymptomatic-removed (VEFIAR) model with 8 age groups, each partitioned into 4 vaccination status groups, is developed. The optimal dose-wise vaccination strategy is optimized based on the currently estimated immunity barrier of coverage and efficacy, using a greedy algorithm that minimizes the cumulative cases, the number of hospitalizations, or the number of fatalities, respectively, over a given future interval. Parameters of the Delta and Omicron variants are used in the optimization. Results: The estimated contact matrices of cases are concentrated in the middle ages and have magnitudes compatible with estimates from contact surveys in other studies. The VEFIAR model is numerically stable. The optimal controlled vaccination strategy requires immediate vaccination of the unvaccinated, high-contact population aged 30-39 to reduce cumulative cases, and is stable across different basic reproduction numbers (R0). For minimizing hospitalization and fatality, the optimized strategy requires vaccination of both the unvaccinated aged 30-39 with high contact frequency and the vulnerable elderly. Conclusion: The objective of reducing transmission requires vaccination in the age groups with the highest contact frequency, with more priority for the unvaccinated than for the partially or fully vaccinated. The objective of reducing total hospitalization and fatality requires not only reducing transmission but also protecting the vulnerable elderly. The priorities change as vaccination progresses. For any region, if the local contact pattern is available, then together with vaccination coverage, efficacy, and the relative risks in heterogeneous populations, the optimal dose-wise vaccination schedule can be obtained and provides hints for decision-making.
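The dose-wise greedy optimization can be illustrated independently of the full VEFIAR model. In the sketch below a deliberately crude multi-group transmission simulation stands in for VEFIAR, vaccination is assumed fully protective, and the contact matrix, populations, batch size, and rate constants are invented; each batch of doses simply goes to the group whose vaccination lowers the simulated cumulative cases the most.

```python
import numpy as np

def simulate_cases(contact: np.ndarray, susceptible: np.ndarray, beta=0.05, days=60) -> float:
    """Very small multi-group SIR-type simulation used only to score allocations;
    it is a stand-in for the paper's much richer VEFIAR dynamics."""
    n = len(susceptible)
    S, I, cum = susceptible.astype(float), np.full(n, 10.0), 0.0
    for _ in range(days):
        force = beta * contact @ (I / 1e5)
        new = np.minimum(S, force * S)
        S, I = S - new, 0.8 * I + new        # crude recovery of 20% per day
        cum += new.sum()
    return cum

def greedy_allocation(contact, susceptible, batches=5, batch_size=5e4):
    """Dose-wise greedy: give each successive batch of doses to the age group whose
    (assumed fully protective) vaccination lowers simulated cumulative cases the most."""
    S = susceptible.copy().astype(float)
    plan = []
    for _ in range(batches):
        scores = []
        for g in range(len(S)):
            trial = S.copy()
            trial[g] = max(0.0, trial[g] - batch_size)
            scores.append(simulate_cases(contact, trial))
        g = int(np.argmin(scores))
        S[g] = max(0.0, S[g] - batch_size)
        plan.append(g)
    return plan

# Toy setup: 4 hypothetical age groups; group 1 has the highest contact rates.
contact = np.array([[2, 1, 1, 0.5], [1, 6, 2, 1], [1, 2, 3, 1], [0.5, 1, 1, 1.0]])
pop = np.array([2e5, 3e5, 3e5, 2e5])
print(greedy_allocation(contact, pop))
```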

12.
Entropy (Basel) ; 24(12)2022 Nov 30.
Article in English | MEDLINE | ID: mdl-36554158

ABSTRACT

In this study, the performance of intelligent reflecting surfaces (IRSs) with a discrete phase shift strategy is examined in multiple-antenna systems. Considering the IRS network overhead, the achievable rate model is newly designed to evaluate practical IRS system performance. Finding the optimal resolution of the IRS discrete phase shifts and a corresponding phase shift vector is an NP-hard combinatorial problem with extremely large search complexity. Recognizing the performance trade-off between the IRS passive beamforming gain and the IRS signaling overhead, an incremental search method is proposed to determine the optimal resolution of the IRS discrete phase shift. Moreover, two low-complexity sub-algorithms are suggested to obtain the IRS discrete phase shift vector during the incremental search. The proposed incremental-search-based discrete phase shift method efficiently obtains the optimal resolution of the IRS discrete phase shift that maximizes the overhead-aware achievable rate. Simulation results show that the discrete phase shift with the incremental search method outperforms the conventional analog phase shift when the optimal resolution of the IRS discrete phase shift is chosen. Furthermore, a comparison of cumulative distribution functions shows the superiority of the proposed method over the entire coverage area. Specifically, it is shown that more than 20% coverage extension can be accomplished by deploying an IRS with the proposed method.
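The incremental search idea, raising the discrete phase-shift resolution only while the overhead-aware rate keeps improving, can be sketched for a single-user, single-antenna toy case. The channel model, the linear overhead penalty, the noise level, and the per-element rounding rule below are illustrative assumptions, not the paper's system model or its two sub-algorithms.

```python
import numpy as np

def quantized_rate(h: np.ndarray, g: np.ndarray, bits: int, overhead_per_bit=0.02) -> float:
    """Overhead-aware achievable rate for one user: each IRS element's phase is rounded
    to the nearest of 2**bits levels (a greedy per-element rule), and the signaling
    overhead is modeled as a simple linear penalty. All constants are illustrative."""
    levels = 2 * np.pi * np.arange(2 ** bits) / (2 ** bits)
    ideal = -np.angle(h * g)                              # phase that aligns each reflected path
    idx = np.argmin(np.abs(np.exp(1j * ideal[:, None]) - np.exp(1j * levels)), axis=1)
    gain = np.abs(np.sum(h * g * np.exp(1j * levels[idx]))) ** 2
    rate = np.log2(1 + gain / 1e-3)
    return rate * max(0.0, 1 - overhead_per_bit * bits * len(h) / 1e3)

def incremental_search(h, g, max_bits=8):
    """Increase the phase-shift resolution until the overhead-aware rate stops improving."""
    best_bits, best_rate = 1, quantized_rate(h, g, 1)
    for b in range(2, max_bits + 1):
        r = quantized_rate(h, g, b)
        if r <= best_rate:
            break
        best_bits, best_rate = b, r
    return best_bits, best_rate

rng = np.random.default_rng(4)
n = 64                                                    # hypothetical number of IRS elements
h = rng.normal(size=n) + 1j * rng.normal(size=n)          # BS-to-IRS channel
g = rng.normal(size=n) + 1j * rng.normal(size=n)          # IRS-to-user channel
print(incremental_search(h, g))
```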

13.
Front Plant Sci ; 13: 1042035, 2022.
Article in English | MEDLINE | ID: mdl-36483963

ABSTRACT

Herein, a combined multipoint picking scheme is proposed, and the dimensions of the end-effector of the bud picker are selectively designed. First, the end-effector of the bud picker is abstracted as a fixed-size picking box, and it is assumed that the tea buds inside the picking box have a certain probability of being picked. Then, picking-box coverage and a greedy algorithm are designed so that as few picking boxes as possible cover all buds, reducing the number of picking operations. Furthermore, the Graham algorithm and the minimum bounding box are applied to fine-tune the position of each picking box in the optimal covering set, so that the buds are concentrated in the middle of the picking boxes as much as possible. Moreover, the geometric center of each picking box is taken as a picking point, and the ant colony algorithm is used to optimize the picking path of the end-effector. Finally, by analyzing the influence of several parameters on picking performance, the optimal sizes of the picking box are calculated for different conditions. The experimental results show that, compared with the single-point picking scheme, the combined multipoint picking scheme reduces the average number of picking operations by 31.44%, shortens the shortest picking path by 11.10%, and reduces the average consumed time by 50.92%. We believe that the proposed scheme can provide key technical support for the subsequent design of intelligent bud-picking robots.
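The core greedy coverage step, covering all detected buds with as few fixed-size picking boxes as possible, is essentially greedy set cover and can be sketched directly; the Graham-scan fine-tuning and the ant-colony path optimization are omitted. The bud coordinates and box dimensions below are invented, not the paper's calibrated values.

```python
import numpy as np

def greedy_box_cover(buds: np.ndarray, box_w: float, box_h: float) -> list[tuple[float, float]]:
    """Greedy coverage with fixed-size picking boxes: repeatedly place the box
    (anchored at one of the remaining buds) that covers the most uncovered buds."""
    remaining = buds.copy()
    boxes = []
    while len(remaining):
        best_count, best_corner = -1, None
        for x, y in remaining:                     # candidate lower-left corners
            inside = (remaining[:, 0] >= x) & (remaining[:, 0] <= x + box_w) & \
                     (remaining[:, 1] >= y) & (remaining[:, 1] <= y + box_h)
            if inside.sum() > best_count:
                best_count, best_corner, best_mask = inside.sum(), (x, y), inside
        boxes.append(best_corner)
        remaining = remaining[~best_mask]
    return boxes

rng = np.random.default_rng(5)
buds = rng.uniform(0, 50, size=(40, 2))            # 40 hypothetical bud positions (cm)
boxes = greedy_box_cover(buds, box_w=10.0, box_h=10.0)
print(f"{len(boxes)} picking boxes cover all {len(buds)} buds")
```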

14.
PeerJ Comput Sci ; 8: e1118, 2022.
Article in English | MEDLINE | ID: mdl-36426244

ABSTRACT

As science and technology advance, mobile edge computing faces the difficulty of balancing the energy consumption of many devices and workloads. Most related research focuses on exploiting edge server computing performance to reduce mobile device energy consumption and task execution time during task processing. Existing research, however, offers no adequate answer to balancing energy consumption across multiple devices and multiple tasks. The present edge computing system model is therefore extended to address this energy consumption balance problem. On this foundation, we present a blockchain-based analytical method for the energy utilization balance optimization problem of multiple mobile devices and multiple tasks, together with an optimistic scenario, and we analyze the corresponding approximation ratio. Extensive simulation studies compare the proposed approach with the total energy demand optimization method and a random algorithm. The test results demonstrate that the suggested greedy algorithm improves average performance by 66.59 percent in terms of energy balance compared with the random algorithm. Furthermore, when the minimum transmission power of the mobile device is between five and six dBm, the greedy algorithm nearly matches the optimal solution found by the brute-force technique under the classical task topology.

15.
Micromachines (Basel) ; 13(11)2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36363907

ABSTRACT

Constrained random stimulus generation is no longer sufficient to fully simulate the functionality of a digital design. The increasing complexity of today's hardware devices must be supported by powerful development and simulation environments, powerful computational mechanisms, and appropriate software to exploit them. Reinforcement learning, a powerful technique belonging to the field of artificial intelligence, provides the means to efficiently exploit computational resources to find even the least obvious correlations between configuration parameters, stimuli applied to digital design inputs, and their functional states. This paper, in which a novel software system is used to simplify the analysis of simulation outputs and the generation of input stimuli through reinforcement learning methods, provides important details about the setup of the proposed method to automate the verification process. By understanding how to properly configure a reinforcement algorithm to fit the specifics of a digital design, verification engineers can more quickly adopt this automated and efficient stimulus generation method (compared with classical verification) to bring the digital design to a desired functional state. The results obtained are most promising, with up to 52 times fewer steps needed to reach a target state using reinforcement learning than with constrained random stimulus generation.

16.
PeerJ Comput Sci ; 8: e1103, 2022.
Article in English | MEDLINE | ID: mdl-36262160

ABSTRACT

Extractive text summarization (ETS) methods automatically find the salient information in a text using exact sentences from the source text. In this article, we address the question of what summary quality can be achieved with ETS methods. To maximize the ROUGE-1 score, we used five approaches: (1) an adapted reduced variable neighborhood search (RVNS), (2) a Greedy algorithm, (3) VNS initialized by the Greedy algorithm results, (4) a genetic algorithm, and (5) a genetic algorithm initialized by the Greedy algorithm results. Furthermore, we ran experiments on articles from the arXive dataset. We found that ROUGE-1 and ROUGE-2 scores of 0.59 and 0.25, respectively, are achievable by the genetic algorithm initialized with the Greedy algorithm results, which yields the best results among the tested approaches. Moreover, those scores are higher than the scores obtained by current state-of-the-art text summarization models: the best ROUGE-1 score in the literature on the same dataset is 0.46. Therefore, there is still room for the development of ETS methods, which are now undeservedly forgotten.
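The greedy approach the article lists can be sketched as oracle sentence selection: given a reference summary, repeatedly add the source sentence that most increases a (simplified) ROUGE-1 score. The tokenization and the tiny example document below are assumptions; the real experiments use arXive articles and a full ROUGE implementation.

```python
from collections import Counter

def tokens(text: str) -> list[str]:
    return [w.strip(".,;:") for w in text.lower().split()]

def rouge1(candidate: list[str], reference: list[str]) -> float:
    """Unigram recall against the reference; a simplified stand-in for ROUGE-1."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum(min(c, ref[w]) for w, c in cand.items())
    return overlap / max(1, sum(ref.values()))

def greedy_summary(sentences: list[str], reference: str, max_sentences: int = 2) -> list[str]:
    """Greedy ETS sketch: repeatedly add the source sentence that most increases the
    (simplified) ROUGE-1 of the growing summary, given a known reference summary."""
    ref = tokens(reference)
    chosen: list[str] = []
    score = lambda sents: rouge1(tokens(" ".join(sents)), ref)
    while len(chosen) < max_sentences:
        best = max((s for s in sentences if s not in chosen), key=lambda s: score(chosen + [s]))
        if score(chosen + [best]) <= score(chosen):      # stop when nothing improves the score
            break
        chosen.append(best)
    return chosen

doc = ["The model is trained on arXiv articles.",
       "We propose a greedy sentence selector.",
       "The weather was nice during the conference."]
print(greedy_summary(doc, "a greedy selector is trained on arxiv articles"))
```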

17.
Phys Eng Sci Med ; 45(3): 867-882, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35849323

ABSTRACT

Dynamic causal modeling (DCM) is a tool used for effective connectivity (EC) estimation in neuroimaging analysis. However, it is a model-driven analysis method, and the structure of the EC network needs to be determined in advance on the basis of a large amount of prior knowledge. This characteristic makes it difficult to apply DCM to exploratory brain network analysis. Exploratory analysis with DCM can be realized from two perspectives: one is to reduce the computational cost of the model; the other is to reduce the model space. From the perspective of model space reduction, a model space exploration strategy comprising two algorithms is proposed. One algorithm, named GreedyEC, starts from the full model and removes EC, and the other, named GreedyROI, starts from a one-node model and adds EC. The two algorithms were then applied to task-state functional magnetic resonance imaging (fMRI) data of visual object recognition, and the best DCM model was selected from the perspective of model comparison based on the Bayesian model comparison method. Results show that combining the results of the two algorithms can further improve the effect of DCM exploratory analysis. For convenience in application, the algorithms were encapsulated into MATLAB functions based on SPM to help neuroscience researchers analyze the brain's causal information flow network. The strategy provides a model space exploration tool that may obtain the best model from the perspective of model comparison and lower the threshold for DCM analysis.


Subject(s)
Brain Mapping, Magnetic Resonance Imaging, Bayes Theorem, Brain/diagnostic imaging, Brain Mapping/methods, Magnetic Resonance Imaging/methods, Neurological Models
18.
Sensors (Basel) ; 22(6)2022 Mar 21.
Article in English | MEDLINE | ID: mdl-35336578

ABSTRACT

In on-grid microgrids, electric vehicles (EVs) have to be efficiently scheduled for cost-effective electricity consumption and network operation. The stochastic nature of the involved parameters, along with their large number and correlations, makes such scheduling a challenging task. This paper aims to identify pertinent innovative solutions for reducing the total costs of on-grid EVs within hybrid microgrids. To optimally scale the EVs, a heuristic greedy approach is considered. Unlike most existing scheduling methodologies in the literature, the proposed greedy scheduler is model-free, training-free, and yet efficient. The proposed approach considers factors such as the electricity price, the on-grid EVs' states at arrival and departure, and the total revenue needed to meet the load demands. The greedy-based approach performs satisfactorily for the hybrid microgrid system, which comprises photovoltaic generation, a wind turbine, and a local utility grid, while the on-grid EVs are utilized as an energy storage exchange location. A real-time hardware-in-the-loop experiment is comprehensively conducted to maximize the earned profit. Through different uncertainty scenarios, the ability of the proposed greedy approach to obtain a globally optimal solution is assessed. A data simulator was developed to generate evaluation datasets that capture uncertainties in the behavior of the system's parameters. The greedy-based strategy is applicable, scalable, and efficient in terms of total operating expenditures. Furthermore, as EV penetration becomes more versatile, total expenses decrease significantly. Using simulated data with an effective operational duration of 500 years, the proposed approach succeeded in cutting energy consumption costs by about 50-85%, beating existing state-of-the-art results. The proposed approach proved tolerant to the large amounts of uncertainty involved in the system's operational data.


Subject(s)
Electricity, Heuristics, Costs and Cost Analysis
19.
J Math Ind ; 12(1): 2, 2022.
Article in English | MEDLINE | ID: mdl-35036278

ABSTRACT

In the field of model order reduction for frequency response problems, the minimal rational interpolation (MRI) method has been shown to be quite effective. However, in some cases, numerical instabilities may arise when applying MRI to build a surrogate model over a large frequency range, spanning several orders of magnitude. We propose a strategy to overcome these instabilities, replacing an unstable global MRI surrogate with a union of stable local rational models. The partitioning of the frequency range into local frequency sub-ranges is performed automatically and adaptively, and is complemented by a (greedy) adaptive selection of the sampled frequencies over each sub-range. We verify the effectiveness of our proposed method with two numerical examples.
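The greedy adaptive sampling over each frequency sub-range can be illustrated without the minimal rational interpolation machinery itself. In the sketch below a plain polynomial fit in log-frequency stands in for the MRI surrogate (a significant simplification), and the next sample is placed where two fits of different order disagree most; the response function, tolerance, and sample counts are invented.

```python
import numpy as np

def greedy_sampling(response, f_min, f_max, n_init=3, n_max=12, tol=1e-3):
    """Greedy adaptive sampling over one frequency sub-range: the next sample frequency
    is placed where two stand-in surrogates of different order disagree most. A plain
    polynomial fit in log-frequency replaces the minimal rational interpolation surrogate,
    which is not reproduced here."""
    grid = np.logspace(np.log10(f_min), np.log10(f_max), 400)
    samples = list(np.logspace(np.log10(f_min), np.log10(f_max), n_init))
    while len(samples) < n_max:
        x, y = np.log10(samples), [response(f) for f in samples]
        d = min(len(samples) - 1, 6)
        hi = np.polyval(np.polyfit(x, y, d), np.log10(grid))
        lo = np.polyval(np.polyfit(x, y, max(1, d - 1)), np.log10(grid))
        err = np.abs(hi - lo)                       # disagreement as a crude error indicator
        if err.max() < tol:
            break
        samples.append(float(grid[int(np.argmax(err))]))
    return sorted(samples)

# Toy frequency response with a resonance near 1 kHz.
resp = lambda f: 1.0 / abs(1 - (f / 1e3) ** 2 + 0.05j * (f / 1e3))
print([round(f, 1) for f in greedy_sampling(resp, 1e1, 1e5)])
```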

20.
J Comb Optim ; 44(1): 74-93, 2022.
Article in English | MEDLINE | ID: mdl-34658658

ABSTRACT

Network interdiction problems that upgrade critical edges/nodes have important applications in reducing the infectivity of COVID-19. A network of confirmed cases can be described as a rooted tree in which each edge has a weight of infectious intensity. Upgrading edges (nodes) can reduce the infectious intensity of contacts by taking prevention measures such as disinfection (treating the confirmed cases, isolating their close contacts, or vaccinating uninfected people). We take the sum of root-leaf distances in a rooted tree as the whole infectious intensity of the tree. Hence, we consider the sum of root-leaf distance interdiction problem by upgrading edges/nodes on trees (SDIPT-UE/N). The problem (SDIPT-UE) aims to minimize the sum of root-leaf distances by reducing the weights of some critical edges such that the upgrade cost under some measure is upper-bounded by a given value. Differently from (SDIPT-UE), the problem (SDIPT-UN) upgrades a set of critical nodes to reduce the weights of the edges adjacent to those nodes. The related minimum cost problem (MCSDIPT-UE/N) aims to minimize the upgrade cost on the premise that the sum of root-leaf distances is upper-bounded by a given value. We consider different norms to measure the upgrade cost. Under the weighted Hamming distance, we show that the problems (SDIPT-UE/N) and (MCSDIPT-UE/N) are NP-hard by showing their equivalence to the 0-1 knapsack problem. Under the weighted l1 norm, we solve the problems (SDIPT-UE) and (MCSDIPT-UE) in O(n) time by transforming them into continuous knapsack problems. We propose two linear-time greedy algorithms to solve the problem (SDIPT-UE) under the unit Hamming distance and the problem (SDIPT-UN) with unit cost, respectively. Furthermore, for the minimum cost problem (MCSDIPT-UE) under the unit Hamming distance and the problem (MCSDIPT-UN) with unit cost, we provide two O(n log n) time algorithms based on binary search methods. Finally, we perform numerical experiments to compare the results obtained by these algorithms.
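For the weighted l1 case, the reduction to a continuous knapsack suggests the classic fractional-knapsack greedy rule: spend the budget on edges in decreasing order of benefit per unit cost, where the benefit of reducing an edge by one unit is the number of leaves below it. The sketch below uses an invented instance and a sort, so it is O(n log n) rather than the paper's O(n) algorithm.

```python
def sdipt_ue_l1(edges, budget):
    """Greedy sketch for SDIPT-UE under a weighted l1 cost (continuous-knapsack view):
    each edge can be reduced by up to max_reduction at unit_cost per unit of weight,
    and each unit of reduction lowers the root-leaf distance sum by leaves_below."""
    # edges: list of (name, leaves_below, unit_cost, max_reduction)
    order = sorted(edges, key=lambda e: e[1] / e[2], reverse=True)
    saving = 0.0
    for name, leaves, cost, max_cut in order:
        if budget <= 0:
            break
        cut = min(max_cut, budget / cost)     # fractional reduction is allowed under l1
        saving += cut * leaves
        budget -= cut * cost
    return saving

edges = [("root-a", 3, 1.0, 2.0), ("a-b", 2, 0.5, 1.0), ("a-c", 1, 2.0, 4.0)]
print(sdipt_ue_l1(edges, budget=2.0))   # reduction in the sum of root-leaf distances
```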
