Results 1 - 20 of 32
1.
BMC Med Imaging ; 24(1): 186, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39054419

ABSTRACT

Autism Spectrum Disorder (ASD) is a neurodevelopmental condition that affects an individual's behavior, speech, and social interaction. Early and accurate diagnosis of ASD is pivotal for successful intervention. The limited availability of large datasets for neuroimaging investigations, however, poses a significant challenge to the timely and precise identification of ASD. To address this problem, we propose GARL, a novel approach for ASD diagnosis using neuroimaging data. GARL integrates the power of GANs and deep Q-learning to augment limited datasets and enhance diagnostic precision. We utilized the Autism Brain Imaging Data Exchange (ABIDE) I and II datasets and employed a GAN to expand them, creating a more robust and diversified dataset for analysis. This approach not only captures the underlying sample distribution within ABIDE I and II but also employs deep reinforcement learning for continuous self-improvement, significantly enhancing the model's capability to generalize and adapt. Our experimental results confirmed that GAN-based data augmentation effectively improved the performance of all prediction models on both datasets, with GARL's combination of InfoGAN and DQN yielding the most notable improvement.


Subject(s)
Autism Spectrum Disorder, Deep Learning, Neuroimaging, Humans, Autism Spectrum Disorder/diagnostic imaging, Neuroimaging/methods, Child, Neural Networks, Computer, Male, Brain/diagnostic imaging
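Most of the entries in these results share the same deep Q-learning core: the Q-value of the action taken is nudged toward a Bellman target built from the observed reward and the best next-state Q-value. A minimal, dependency-free sketch of that update (the function names and the scalar update rule are illustrative stand-ins for the gradient step a real DQN would take; this is not code from the GARL paper):

```python
def td_target(reward, next_q_values, gamma=0.99, done=False):
    """Bellman target: r + gamma * max_a' Q(s', a'), truncated at episode end."""
    if done:
        return reward
    return reward + gamma * max(next_q_values)

def q_update(q_values, action, target, lr=0.1):
    """Move the chosen action's Q-value toward the target.

    A scalar stand-in for the SGD step a deep Q-network would apply to its
    weights; shown here in tabular form for clarity.
    """
    q = list(q_values)
    q[action] += lr * (target - q[action])
    return q
```

In a full DQN, the same target feeds a mean-squared-error loss over minibatches sampled from a replay buffer, but the Bellman structure is unchanged.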
2.
Comput Biol Med ; 178: 108694, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38870728

ABSTRACT

Telemedicine is an emerging development in the healthcare domain, where Internet of Things (IoT) fiber-optic technology supports telemedicine applications and improves overall digital healthcare performance. Telemedicine applications include bowel disease monitoring based on fiber-optic laser endoscopy, fiber-optic lighting for gastrointestinal disease, remote doctor-patient communication, and remote surgery. However, many existing systems are not effective, and their approaches based on deep reinforcement learning have not obtained optimal results. This paper presents a fiber-optic IoT healthcare system based on deep reinforcement learning with combinatorial constraint scheduling for hybrid telemedicine applications. In the proposed system, we introduce the adaptive security deep Q-learning network (ASDQN) algorithm to execute all telemedicine applications under their given quality-of-service (deadline, latency, security, and resource) constraints. For the problem solution, we exploited different fiber-optic endoscopy datasets with image, video, and numeric data for telemedicine applications. The objective is to minimize the overall latency of telemedicine applications (e.g., local, communication, and edge nodes) and maximize the overall rewards during offloading and scheduling on different nodes. The simulation results show that ASDQN outperforms the existing state-action-reward-state-action (SARSA) and deep Q-learning network (DQN) policies on all telemedicine applications with respect to their QoS constraints and objectives during execution and scheduling on different nodes.


Subject(s)
Deep Learning, Internet of Things, Telemedicine, Humans, Fiber Optic Technology, Algorithms
3.
Biomimetics (Basel) ; 9(6)2024 May 21.
Article in English | MEDLINE | ID: mdl-38921187

ABSTRACT

In the complex and dynamic landscape of cyber threats, organizations require sophisticated strategies for managing Cybersecurity Operations Centers and deploying Security Information and Event Management systems. Our study enhances these strategies by integrating the precision of well-known biomimetic optimization algorithms, namely Particle Swarm Optimization, the Bat Algorithm, the Gray Wolf Optimizer, and the Orca Predator Algorithm, with the adaptability of Deep Q-Learning, a reinforcement learning technique that leverages deep neural networks to teach algorithms optimal actions through trial and error in complex environments. This hybrid methodology targets the efficient allocation and deployment of network intrusion detection sensors while balancing cost-effectiveness with essential network security imperatives. Comprehensive computational tests show that versions enhanced with Deep Q-Learning significantly outperform their native counterparts, especially in complex infrastructures. These results highlight the efficacy of integrating metaheuristics with reinforcement learning to tackle complex optimization challenges, underscoring Deep Q-Learning's potential to boost cybersecurity measures in rapidly evolving threat environments.

4.
Sensors (Basel) ; 24(6)2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38544109

ABSTRACT

To address traffic flow fluctuations caused by changes in traffic signal control schemes on tidal lanes and maintain smooth traffic operations, this paper proposes a method for controlling traffic signal transitions on tidal lanes. First, the proposed method includes designing an intersection overlap phase scheme based on the traffic flow conflict matrix in the tidal lane scenario and a fast, smooth transition method for key intersections based on the flow ratio. The aim of the control is to equalize average queue lengths and minimize average vehicle delays for different flow directions at the intersection. This study also analyzes various tidal lane scenarios based on the different opening states of the tidal lanes at related intersections. The transitions of phase offsets are emphasized after a comprehensive analysis of transition time and smoothing characteristics. In addition, this paper proposes a coordinated method for tidal lanes to optimize the phase offset at arterial intersections for smooth and rapid transitions. The method uses deep Q-learning, a reinforcement learning algorithm, for optimal action selection (OSA) to develop adaptive traffic signal transition control and enhance its efficiency. Finally, a simulation experiment using a traffic control interface is presented to validate the proposed approach. This study shows that the method leads to smoother and faster traffic signal transitions across different tidal lane scenarios than the conventional method. Implementing this solution can benefit intersection groups by reducing traffic delays, improving traffic efficiency, and decreasing air pollution caused by congestion.

5.
Sensors (Basel) ; 24(6)2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38544128

ABSTRACT

With the exponential growth of wireless devices and the demand for real-time processing, traditional server architectures face challenges in meeting the ever-increasing computational requirements. This paper proposes a collaborative edge computing framework to offload and process tasks efficiently in such environments. By equipping a moving unmanned aerial vehicle (UAV) as the mobile edge computing (MEC) server, the proposed architecture aims to release the burden on roadside units (RSUs) servers. Specifically, we propose a two-layer edge intelligence scheme to allocate network computing resources. The first layer intelligently offloads and allocates tasks generated by wireless devices in the vehicular system, and the second layer utilizes the partially observable stochastic game (POSG), solved by duelling deep Q-learning, to allocate the computing resources of each processing node (PN) to different tasks. Meanwhile, we propose a weighted position optimization algorithm for the UAV movement in the system to facilitate task offloading and task processing. Simulation results demonstrate the improved performance by applying the proposed scheme.
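The "duelling deep Q-learning" solver mentioned above uses a network head split into a state value V(s) and per-action advantages A(s, a), recombined as Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a'). A toy sketch of that aggregation step (illustrative only, not the paper's implementation):

```python
def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a').

    Subtracting the mean advantage makes the V/A decomposition identifiable,
    so the network can learn how valuable a state is independently of which
    action is chosen there.
    """
    mean_adv = sum(advantages) / len(advantages)
    return [value + adv - mean_adv for adv in advantages]
```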

6.
Int J Comput Assist Radiol Surg ; 19(6): 995-1002, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38411781

ABSTRACT

PURPOSE: Traditional techniques for automating the planning of brain electrode placement, based on multi-objective optimization involving many parameters, are subject to limitations, especially sensitivity to local optima, and tend to be replaced by machine learning approaches. This paper explores the feasibility of using deep reinforcement learning (DRL) in this context, starting with the single-electrode use case of deep brain stimulation (DBS). METHODS: We propose a DRL approach based on deep Q-learning in which the states represent the electrode trajectory and associated information, and the actions are the possible motions. Deep neural networks make it possible to navigate the complex state space derived from MRI data. The chosen reward function emphasizes safety and accuracy in reaching the target structure. The results were compared with a reference (segmented electrode) and a conventional technique. RESULTS: The DRL approach excelled in navigating the complex anatomy, consistently providing safer and more precise electrode placements than the reference. Compared to conventional techniques, it showed an improvement in accuracy of 2.3% in average proximity to obstacles and 19.4% in average orientation angle. Expectedly, computation times rose significantly, from 2 to 18 min. CONCLUSION: Our investigation into DRL for DBS electrode trajectory planning has showcased its promising potential. Despite delivering only modest accuracy gains compared to traditional methods in the single-electrode case, its relevance for problems with high-dimensional state and action spaces and its resilience against local optima highlight its promise for complex scenarios. This preliminary study constitutes a first step toward the more challenging problem of multiple-electrode planning.


Subject(s)
Deep Brain Stimulation, Deep Learning, Magnetic Resonance Imaging, Humans, Deep Brain Stimulation/methods, Magnetic Resonance Imaging/methods, Feasibility Studies, Electrodes, Implanted, Reinforcement, Psychology
7.
Nano Lett ; 23(24): 11685-11692, 2023 Dec 27.
Article in English | MEDLINE | ID: mdl-38060838

ABSTRACT

The rapid development of 6G communications using terahertz (THz) electromagnetic waves has created a demand for highly sensitive THz nanoresonators capable of detecting these waves. Among the potential candidates, THz nanogap loop arrays show promising characteristics but require significant computational resources for accurate simulation. This requirement arises because their unit cells are 10 times smaller than millimeter wavelengths, with nanogap regions that are 1,000,000 times smaller. To address this challenge, we propose a rapid inverse design method using physics-informed machine learning, employing double deep Q-learning with an analytical model of the THz nanogap loop array. In ∼39 h on a mid-range personal computer, our approach identifies the optimal structure through 200,000 iterations, achieving an experimental electric field enhancement of 32,000 at 0.2 THz, 300% stronger than prior results. Our analytical model-based approach significantly reduces the computational resources required, offering a practical alternative to numerical simulation-based inverse design for THz nanodevices.
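The double deep Q-learning named in this abstract differs from plain DQN in how the Bellman target is built: the online network selects the next action and a separate target network evaluates it, which reduces the overestimation bias of taking a single network's maximum. A hedged sketch of that target (names are illustrative, not from the paper):

```python
def double_dqn_target(reward, online_next_q, target_next_q, gamma=0.99, done=False):
    """Double DQN target: r + gamma * Q_target(s', argmax_a Q_online(s', a)).

    Selection (argmax over the online net's Q-values) is decoupled from
    evaluation (the target net's Q-value for that action).
    """
    if done:
        return reward
    best_action = max(range(len(online_next_q)), key=lambda a: online_next_q[a])
    return reward + gamma * target_next_q[best_action]
```

With a single network, the same sample would use max(target_next_q) directly, which systematically overestimates noisy Q-values.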

8.
Environ Monit Assess ; 195(11): 1389, 2023 Oct 31.
Article in English | MEDLINE | ID: mdl-37903916

ABSTRACT

Ensuring the classification of water bodies suitable for fish habitat is essential for animal preservation and commercial fish farming. However, existing supervised machine learning models for predicting water quality lack specificity regarding fish survival. This study addresses this limitation and presents a novel model for forecasting fish viability in open aquaculture ecosystems. The proposed model combines reinforcement learning through Q-learning and deep feed-forward neural networks, enabling it to capture intricate patterns and relationships in complex aquatic environments. Moreover, the model's reinforcement learning capability reduces the reliance on labeled data and offers potential for continuous improvement over time. By accurately classifying water bodies based on fish suitability, the proposed model provides valuable insights for sustainable aquaculture management and environmental conservation. Experimental results show a significantly improved accuracy of 96% for the proposed DQN-based model, outperforming existing Gaussian Naive Bayes (78%), Random Forest (86%), and K-Nearest Neighbors (92%) classifiers on the same dataset. These findings highlight the effectiveness of the proposed approach in forecasting fish viability and its potential to address the limitations of existing models.


Subject(s)
Ecosystem, Environmental Monitoring, Animals, Bayes Theorem, Neural Networks, Computer, Fishes, Fisheries
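A Q-learning component like the one described above needs an exploration policy, and epsilon-greedy is the scheme most commonly paired with DQN-style models: act randomly with probability epsilon, otherwise pick the highest-valued action. A minimal sketch (hypothetical, not the paper's code):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Return a random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

In practice epsilon is annealed from near 1.0 toward a small floor as training progresses, trading exploration for exploitation.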
9.
Sensors (Basel) ; 23(15)2023 Jul 29.
Article in English | MEDLINE | ID: mdl-37571579

ABSTRACT

To address the limitations of traditional resource allocation algorithms in the Internet of Vehicles (IoV), which cannot meet the stringent demands for ultra-low latency and high reliability in vehicle-to-vehicle (V2V) communication, this paper proposes a wireless resource allocation algorithm for V2V communication based on the multi-agent deep Q-network (MDQN). The system model builds on 5G network slicing technology and maximizes the weighted spectrum-energy efficiency (SEE) while satisfying reliability and latency constraints. In this approach, each V2V link is treated as an agent, and the state space, actions, and reward function of the MDQN are specifically designed. The neural network parameters of the MDQN are determined through centralized training, and the optimal resource allocation strategy is achieved through distributed execution. Simulation results demonstrate that the proposed scheme significantly improves the SEE of the network while maintaining a certain success rate for V2V link load transmission.

10.
Sensors (Basel) ; 23(15)2023 Aug 03.
Article in English | MEDLINE | ID: mdl-37571695

ABSTRACT

The Federated Cloud Computing (FCC) paradigm provides scalability advantages to Cloud Service Providers (CSP) in preserving their Service Level Agreement (SLA) as opposed to single Data Centers (DC). However, existing research has primarily focused on Virtual Machine (VM) placement, with less emphasis on energy efficiency and SLA adherence. In this paper, we propose a novel solution, Federated Cloud Workload Prediction with Deep Q-Learning (FEDQWP). Our solution addresses the complex VM placement problem, energy efficiency, and SLA preservation, making it comprehensive and beneficial for CSPs. By leveraging the capabilities of deep learning, our FEDQWP model extracts underlying patterns and optimizes resource allocation. Real-world workloads are extensively evaluated to demonstrate the efficacy of our approach compared to existing solutions. The results show that our DQL model outperforms other algorithms in terms of CPU utilization, migration time, finished tasks, energy consumption, and SLA violations. Specifically, our Q-learning model achieves efficient CPU utilization with a median value of 29.02, completes migrations in an average of 0.31 units, finishes an average of 699 tasks, consumes the least energy with an average of 1.85 kWh, and exhibits the lowest number of SLA violations with an average of 0.03 violations proportionally. These quantitative results highlight the superiority of our proposed method in optimizing performance in FCC environments.

11.
Quant Imaging Med Surg ; 13(8): 4879-4896, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37581036

ABSTRACT

Background: Estimation of the global optima of multiple model parameters is valuable for precisely extracting parameters that characterize a physical environment. This is especially useful for imaging, to form reliable, meaningful physical images with good reproducibility. However, it is challenging to avoid different local minima when the objective function is nonconvex. The problem of globally searching multiple parameters was formulated as a k-D move in the parameter space, and the parameter updating scheme was converted into a state-action decision-making problem. Methods: We proposed a novel Deep Q-learning of Model Parameters (DQMP) method for global optimization, which updated the parameter configurations through actions that maximized the Q-value and employed a Deep Reward Network (DRN) designed to learn global reward values from both visible fitting errors and hidden parameter errors. The DRN was constructed with Long Short-Term Memory (LSTM) layers followed by fully connected layers and a rectified linear unit (ReLU) nonlinearity; the depth of the DRN depended on the number of parameters. Through DQMP, the k-D parameter search in each step resembled the decision-making of action selection from 3^k configurations in a k-D board game. Results: The DQMP method was evaluated on widely used general functions that can express a variety of experimental data and was further validated on imaging applications. The convergence of the proposed DRN was evaluated, showing that the loss values of six general functions all converged after 12 epochs. The parameters estimated by the DQMP method had relative errors of less than 4% for all cases, whereas the relative errors achieved by Q-learning (QL) and the Least Squares Method (LSM) were 17% and 21%, respectively. Furthermore, the imaging experiments demonstrated that the images of the parameters estimated by the proposed DQMP method were the closest to the ground-truth simulation images when compared to other methods. Conclusions: The proposed DQMP method was able to achieve global optima, thus yielding accurate model parameter estimates. DQMP is promising for estimating multiple high-dimensional parameters and can be generalized to global optimization of many other complex nonconvex functions and imaging of physical parameters.
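The abstract frames each DQMP step as choosing one of 3^k moves in a k-D parameter space, with each parameter decreasing, staying put, or increasing. A sketch of how that action set might be enumerated (the step size and function name are invented for illustration; the paper's agent would score these candidates with its Q-network rather than enumerate them blindly at scale):

```python
from itertools import product

def parameter_moves(params, step=1.0):
    """Enumerate all 3**k neighbor configurations of a k-D parameter vector.

    Each parameter independently moves by -step, 0, or +step, mirroring the
    'k-D board game' framing of the DQMP action space.
    """
    deltas = (-step, 0.0, step)
    return [tuple(p + d for p, d in zip(params, delta))
            for delta in product(deltas, repeat=len(params))]
```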

12.
Sensors (Basel) ; 23(10)2023 May 17.
Article in English | MEDLINE | ID: mdl-37430735

ABSTRACT

This paper investigates the problem of buffer-aided relay selection to achieve reliable and secure communications in a two-hop amplify-and-forward (AF) network with an eavesdropper. Due to the fading of wireless signals and the broadcast nature of wireless channels, signals transmitted over the network may be undecodable at the receiver or intercepted by an eavesdropper. Most available buffer-aided relay selection schemes consider either reliability or security issues in wireless communications; work addressing both is rare. This paper proposes a buffer-aided relay selection scheme based on deep Q-learning (DQL) that considers both reliability and security. Through Monte Carlo simulations, we verify the reliability and security performance of the proposed scheme in terms of the connection outage probability (COP) and secrecy outage probability (SOP), respectively. The simulation results show that a two-hop wireless relay network can achieve reliable and secure communications using our proposed scheme. We also performed comparison experiments between our proposed scheme and two benchmark schemes; the results indicate that our scheme outperforms the max-ratio scheme in terms of the SOP.

13.
Front Plant Sci ; 14: 1142957, 2023.
Article in English | MEDLINE | ID: mdl-37484461

ABSTRACT

This study proposes an adaptive image augmentation scheme using deep reinforcement learning (DRL) to improve the performance of a deep learning-based automated optical inspection system. The study addresses the inconsistent performance of single image augmentation methods by introducing a DRL algorithm, DQN, to select the most suitable augmentation method for each image. The proposed approach extracts geometric and pixel indicators to form states and uses the DeepLab-v3+ model to verify the augmented images and generate rewards. Image augmentation methods are treated as actions, and the DQN algorithm selects the best methods based on the images and the segmentation model. The study demonstrates that the proposed framework outperforms any single image augmentation method and achieves better segmentation performance than other semantic segmentation models. The framework has practical implications for developing more accurate and robust automated optical inspection systems, which are critical for ensuring product quality in various industries. Future research can explore the generalizability and scalability of the proposed framework to other domains and applications. The code for this application is uploaded at https://github.com/lynnkobe/Adaptive-Image-Augmentation.git.

14.
PeerJ Comput Sci ; 9: e1356, 2023.
Article in English | MEDLINE | ID: mdl-37346708

ABSTRACT

Music composition is a complex field that is difficult to automate because the computational definition of what is good or aesthetically pleasing is vague and subjective. Many neural network-based methods have been applied in the past, but they lack consistency and, in most cases, their outputs fail to impress. The most common issues include excessive repetition and a lack of style and structure, which are hallmarks of artificial compositions. In this project, we build on a model created by Magenta, the RL Tuner, extending it to emulate a specific musical genre: the Galician Xota. To do this, we design a new rule-set containing rules that a composition should follow to adhere to this style. We then implement the rules as reward functions, which are used to train the Deep Q Network that generates the pieces. After extensive experimentation, we achieve an implementation of our rule-set that effectively enforces each rule on the generated compositions, and we outline a solid research methodology for future researchers looking to use this architecture. Finally, we propose promising future work on further applications for this model and improvements to the experimental procedure.
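Rule-sets like the one above are typically enforced through reward shaping: each stylistic rule becomes a reward or penalty term summed into the DQN's training signal. A toy illustration (the rules, weights, and pitch encoding here are invented for the sketch, not the Galician Xota rules from the paper):

```python
def rule_reward(note_sequence):
    """Sum per-rule reward terms for a toy integer-pitch melody."""
    reward = 0.0
    # Penalize immediate repetition, a failure mode the abstract highlights.
    repeats = sum(1 for a, b in zip(note_sequence, note_sequence[1:]) if a == b)
    reward -= 0.5 * repeats
    # Reward ending on the tonic (pitch 0 in this toy encoding).
    if note_sequence and note_sequence[-1] == 0:
        reward += 1.0
    return reward
```

The RL Tuner combines terms like these with a pretrained note-prediction model's log-probability, so the agent balances stylistic rules against learned musicality.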

15.
Diagnostics (Basel) ; 13(8)2023 Apr 20.
Article in English | MEDLINE | ID: mdl-37189591

ABSTRACT

While the world works quietly to repair the damage caused by COVID-19's widespread transmission, the monkeypox virus threatens to become a global pandemic. Several nations report new monkeypox cases daily, despite the virus being less deadly and contagious than COVID-19. Monkeypox disease may be detected using artificial intelligence techniques. This paper suggests two strategies, based on feature extraction and classification, for improving monkeypox image classification precision: a Q-learning algorithm that determines the rate at which an action occurs in a particular state, and Malneural networks, binary hybrid algorithms that improve the parameters of multi-layer neural networks. The algorithms are evaluated using an openly available dataset. Interpretation criteria were utilized to analyze the proposed optimization feature selection for monkeypox classification, and a series of numerical tests were conducted to evaluate the efficiency, significance, and robustness of the suggested algorithms. The method achieved 95% precision, 95% recall, and a 96% F1 score for monkeypox disease, a higher accuracy than traditional learning methods. The overall macro average was around 0.95, and the overall weighted average was around 0.96. Compared with the benchmark algorithms DDQN, Policy Gradient, and Actor-Critic, the Malneural network had the highest accuracy (around 0.985). The proposed methods were found to be more effective than traditional methods. Clinicians can use this proposal to treat monkeypox patients, and administrative agencies can use it to observe the origin and current status of the disease.

16.
Sensors (Basel) ; 23(6)2023 Mar 10.
Article in English | MEDLINE | ID: mdl-36991742

ABSTRACT

With the rise of Industry 4.0 and artificial intelligence, the demand for industrial automation and precise control has increased. Machine learning can reduce the cost of machine parameter tuning and improve high-precision positioning motion. In this study, a visual image recognition system was used to observe the displacement of an XXY planar platform. Ball-screw clearance, backlash, nonlinear frictional force, and other factors affect the accuracy and reproducibility of positioning. Therefore, the actual positioning error was determined by inputting images captured by a charge-coupled device camera into a Q-learning reinforcement learning algorithm. Temporal-difference learning and accumulated rewards were used to perform Q-value iteration to enable optimal platform positioning. A deep Q-network model was constructed and trained through reinforcement learning to effectively estimate the XXY platform's positioning error and predict the command compensation according to the error history. The constructed model was validated through simulations. The adopted methodology can be extended to other control applications based on the interaction between feedback measurement and artificial intelligence.

17.
Sensors (Basel) ; 23(4)2023 Feb 15.
Article in English | MEDLINE | ID: mdl-36850792

ABSTRACT

Adaptation of handover parameters in ultra-dense networks has always been one of the key issues in optimizing network performance. Aiming at the optimization goal of an effective handover ratio, this paper proposes a deep Q-network (DQN) method that dynamically selects handover parameters according to wireless signal fading conditions while preserving good backward compatibility. To enhance the efficiency and performance of the DQN method, Long Short-Term Memory (LSTM) is used to build a digital twin and assist the DQN algorithm in achieving a more efficient search. Simulation experiments prove that the enhanced method converges faster than the ordinary DQN method and, at the same time, achieves an average effective handover ratio increase of 2.7%. Moreover, across different wireless signal fading intervals, the proposed method achieves better performance.

18.
Front Robot AI ; 9: 880547, 2022.
Article in English | MEDLINE | ID: mdl-36226257

ABSTRACT

Social robotics is a branch of human-robot interaction dedicated to developing systems that allow robots to operate in unstructured environments shared with human beings. Social robots must interact with humans by understanding social signals and responding to them appropriately. Most social robots are still pre-programmed and have little ability to learn and respond with adequate actions during an interaction with humans. More elaborate recent methods use body movements, gaze direction, and body language, but they generally neglect vital signals present during an interaction, such as the human emotional state. In this article, we address the problem of developing a system that enables a robot to decide autonomously which behaviors to emit as a function of the human emotional state. On one side, Reinforcement Learning (RL) offers social robots a way to learn advanced models of social cognition, following a self-learning paradigm and using characteristics automatically extracted from high-dimensional sensory information. On the other side, Deep Learning (DL) models can help robots capture information from the environment, abstracting complex patterns from visual information. The combination of these two techniques is known as Deep Reinforcement Learning (DRL). The purpose of this work is the development of a DRL system that promotes natural and socially acceptable interaction between humans and robots. To this end, we propose an architecture, Social Robotics Deep Q-Network (SocialDQN), for teaching social robots to behave and interact appropriately with humans based on social signals, especially human emotional states. This is a relevant contribution to the area, since social signals must not only be recognized by the robot but also help it take actions appropriate to the situation at hand. Characteristics extracted from people's faces are considered for estimating the human emotional state, aiming to improve the robot's perception. The development and validation of the system were carried out with the support of the SimDRLSR simulator. Results obtained through several tests demonstrate that the system learned to maximize the rewards satisfactorily and, consequently, that the robot behaves in a socially acceptable way.

19.
Entropy (Basel) ; 24(8)2022 Aug 22.
Article in English | MEDLINE | ID: mdl-36010832

ABSTRACT

This paper addresses the problem of detecting multiple static and mobile targets by an autonomous mobile agent acting under uncertainty. It is assumed that the agent can detect targets at different distances and that detection is subject to type I and type II errors. The goal of the agent is to plan and follow a trajectory that results in the detection of the targets in minimal time. The suggested solution applies deep Q-learning to maximize the cumulative information gain regarding the targets' locations and minimize the trajectory length on a map with a predefined detection probability. The Q-learning process is based on a neural network that receives the agent's location and the current probability map and outputs the preferred move of the agent. The presented procedure is compared with previously developed sequential decision-making techniques, and it is demonstrated that the suggested novel algorithm strongly outperforms the existing methods.
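A common way to make "cumulative information gain" concrete for a probability map is the reduction in the map's Bernoulli entropy after a detection update: each cell's occupancy probability contributes entropy, and a confident observation lowers it. A hedged sketch of that measure (illustrative, not the paper's implementation):

```python
import math

def map_entropy(prob_map):
    """Total Bernoulli entropy (bits) of per-cell target probabilities."""
    return -sum(p * math.log2(p) + (1 - p) * math.log2(1 - p)
                for p in prob_map if 0.0 < p < 1.0)  # 0/1 cells contribute 0

def information_gain(before, after):
    """Entropy reduction achieved by an observation update of the map."""
    return map_entropy(before) - map_entropy(after)
```

An agent rewarded with this quantity is driven toward observations that most sharpen its belief about where the targets are.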

20.
Biomolecules ; 12(6)2022 05 25.
Article in English | MEDLINE | ID: mdl-35740872

ABSTRACT

The drug repurposing of known approved drugs (e.g., lopinavir/ritonavir) has failed to treat SARS-CoV-2-infected patients. Therefore, it is important to generate new chemical entities against this virus. As a critical enzyme in the lifecycle of the coronavirus, the 3C-like main protease (3CLpro or Mpro) is the most attractive target for antiviral drug design. Based on a recently solved structure (PDB ID: 6LU7), we developed a novel advanced deep Q-learning network with a fragment-based drug design (ADQN-FBDD) for generating potential lead compounds targeting SARS-CoV-2 3CLpro. We obtained a series of derivatives from the lead compounds based on our structure-based optimization policy (SBOP). All of the 47 lead compounds obtained directly with our AI model and related derivatives based on the SBOP are accessible in our molecular library. These compounds can be used as potential candidates by researchers to develop drugs against SARS-CoV-2.


Subject(s)
COVID-19 Drug Treatment, SARS-CoV-2, Artificial Intelligence, Coronavirus 3C Proteases, Cysteine Endopeptidases/chemistry, Humans, Molecular Docking Simulation, Protease Inhibitors/chemistry, Protease Inhibitors/pharmacology, Viral Nonstructural Proteins