Results 1 - 20 of 374
1.
Magn Reson Med; 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39270130

ABSTRACT

PURPOSE: Computational simulation of phase-contrast MRI (PC-MRI) is an attractive way to physically interpret properties and errors in MRI-reconstructed flow velocity fields. Recent studies have developed PC-MRI simulators that solve the Bloch equation, with the magnetization transport modeled using a Lagrangian approach. Because this method expresses the magnetization as a spatial distribution of particles, its numerical accuracy is known to depend on the particle density and its spatial uniformity. This study developed an alternative method for PC-MRI modeling using an Eulerian approach, in which the magnetization is expressed as a spatially smooth, continuous function.
METHODS: The magnetization motion was described using the Bloch equation with an advection term and computed on a fixed grid using a finite difference method, and k-space sampling was implemented using a spoiled gradient echo sequence. PC-MRI scans of fully developed flow in straight and stenosed cylinders were acquired to provide numerical examples.
RESULTS: Reconstructed flow in a straight cylinder showed excellent agreement with the input velocity profiles, with mean errors of less than 0.5% of the maximum velocity. Numerical cases of flow in a stenosed cylinder successfully demonstrated the velocity profiles, with displacement artifacts depending on scan parameters and intravoxel dephasing due to flow disturbances. These results were in good agreement with those obtained using the Lagrangian approach with a sufficient particle density.
CONCLUSION: The feasibility of the Eulerian approach to PC-MRI modeling was successfully demonstrated.
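For orientation, the governing equation of such an Eulerian formulation is the Bloch equation augmented with an advection term; the form below is a standard one and omits the specific gradient waveforms, spoiling, and boundary treatment of the authors' simulator, which the abstract does not give.

    \frac{\partial \mathbf{M}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{M}
        = \gamma\,\mathbf{M}\times\mathbf{B}(\mathbf{r},t)
        - \frac{M_x\,\hat{\mathbf{x}} + M_y\,\hat{\mathbf{y}}}{T_2}
        - \frac{(M_z - M_0)\,\hat{\mathbf{z}}}{T_1}

Here u is the prescribed flow velocity field, B includes the static field and imaging gradients, and T1/T2 are relaxation times; discretizing the advection term on a fixed grid is what distinguishes this approach from particle-based (Lagrangian) simulators.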

2.
Patterns (N Y); 5(8): 101024, 2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39233696

ABSTRACT

In the rapidly evolving field of bioimaging, the integration and orchestration of findable, accessible, interoperable, and reusable (FAIR) image analysis workflows remain a challenge. We introduce BIOMERO (bioimage analysis in OMERO), a bridge connecting OMERO, a renowned bioimaging data management platform; FAIR workflows; and high-performance computing (HPC) environments. BIOMERO facilitates seamless execution of FAIR workflows, particularly for large datasets from high-content or high-throughput screening. BIOMERO empowers researchers by eliminating the need for specialized knowledge, enabling scalable image processing directly from OMERO. BIOMERO notably supports the sharing and utilization of FAIR workflows between OMERO, Cytomine/BIAFLOWS, and other bioimaging communities. BIOMERO will promote the widespread adoption of FAIR workflows, emphasizing reusability, across the realm of bioimaging research. Its user-friendly interface will empower users, including those without technical expertise, to seamlessly apply these workflows to their datasets, democratizing the utilization of AI by the broader research community.

3.
J Appl Crystallogr; 57(Pt 4): 1217-1228, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39108808

ABSTRACT

Presented and discussed here is the implementation of a software solution that provides prompt X-ray diffraction data analysis during fast dynamic compression experiments conducted with the dynamic diamond anvil cell technique. It includes efficient data collection, streaming of data and metadata to a high-performance computing (HPC) cluster, fast azimuthal data integration on the cluster, and tools for controlling the data processing steps and visualizing the data using the DIOPTAS software package. This data processing pipeline is valuable for a wide range of studies. Its potential is illustrated with two examples of data collected on ammonia-water mixtures and multiphase mineral assemblies under high pressure. The pipeline is designed to be generic in nature and could be readily adapted to provide rapid feedback for many other X-ray diffraction techniques, e.g. large-volume press studies, in situ stress/strain studies, phase transformation studies, and chemical reactions studied with high-resolution diffraction.
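The abstract does not name the integration engine; azimuthal integration in pipelines of this kind is commonly performed with pyFAI, so the sketch below (calibration and frame file names are placeholders) illustrates the per-frame reduction such a cluster step would carry out before results reach a DIOPTAS-style viewer.

    import pyFAI
    import fabio

    # Placeholder calibration and frame paths; a real pipeline streams these from the beamline.
    ai = pyFAI.load("detector_geometry.poni")        # detector geometry / calibration
    frame = fabio.open("dDAC_frame_0001.edf").data   # one diffraction image

    # Reduce the 2D image to a 1D pattern (2-theta vs. intensity) for visualization and fitting.
    two_theta, intensity = ai.integrate1d(frame, npt=2048, unit="2th_deg")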

4.
BMC Bioinformatics; 25(1): 272, 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39169276

ABSTRACT

BACKGROUND: The availability of transcriptomic data for species without a reference genome enables the construction of de novo transcriptome assemblies as alternative reference resources from RNA-Seq data. A transcriptome provides direct information about a species' protein-coding genes under specific experimental conditions. The de novo assembly process produces a unigenes file in FASTA format, which is subsequently targeted for annotation. Homology-based annotation, a method to infer the function of sequences by estimating similarity with other sequences in a reference database, is a computationally demanding procedure.
RESULTS: To mitigate the computational burden, we introduce HPC-T-Annotator, a tool for de novo transcriptome homology annotation on high-performance computing (HPC) infrastructures, designed for straightforward configuration via a Web interface. Once the configuration data are given, the entire parallel annotation software is automatically generated and can be launched on a supercomputer using a simple command line. The output data can then be easily viewed using post-processing utilities in the form of Python notebooks integrated into the proposed software.
CONCLUSIONS: HPC-T-Annotator expedites homology-based annotation of de novo transcriptome assemblies. Its efficient parallelization strategy on HPC infrastructures significantly reduces computational load and execution times, enabling large-scale transcriptome analysis and comparison projects, while its intuitive graphical interface extends accessibility to users without IT skills.
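The computationally demanding step being parallelised is the homology search over the unigenes FASTA file; a common pattern for this kind of parallelisation (shown here as a generic sketch, not necessarily HPC-T-Annotator's own implementation) is to split the FASTA into chunks and run one aligner job per chunk.

    from pathlib import Path
    from Bio import SeqIO  # Biopython

    def split_fasta(fasta_path: str, n_chunks: int, out_dir: str = "chunks"):
        """Split a unigenes FASTA into n_chunks files, one per parallel annotation job."""
        records = list(SeqIO.parse(fasta_path, "fasta"))
        Path(out_dir).mkdir(exist_ok=True)
        chunk_size = -(-len(records) // n_chunks)  # ceiling division
        for i in range(n_chunks):
            chunk = records[i * chunk_size:(i + 1) * chunk_size]
            if chunk:
                SeqIO.write(chunk, f"{out_dir}/unigenes_{i:03d}.fasta", "fasta")

    # Each chunk can then be aligned independently (e.g. BLASTX/DIAMOND) as one HPC array task.
    split_fasta("unigenes.fasta", n_chunks=64)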


Subject(s)
Molecular Sequence Annotation, Software, Transcriptome, Transcriptome/genetics, Molecular Sequence Annotation/methods, Gene Expression Profiling/methods, Computational Biology/methods, Genetic Databases
5.
Front Physiol; 15: 1408626, 2024.
Article in English | MEDLINE | ID: mdl-39139481

ABSTRACT

Background: Cardiac pacemaking remains an unsolved matter from many perspectives. Extensive experimental and computational studies have been performed to describe sinoatrial physiology across different scales, from the molecular to the clinical level. Nevertheless, the mechanism by which a heartbeat is generated inside the sinoatrial node and propagated to the working myocardium is not fully understood at present. This work aims to provide quantitative information about this fascinating phenomenon, especially regarding the contributions of cellular heterogeneity and fibroblasts to sinoatrial node automaticity and atrial driving.
Methods: We developed a bidimensional computational model of human right atrial tissue, including the sinoatrial node. State-of-the-art knowledge of the anatomical and physiological aspects was adopted during the design of the baseline tissue model. The novelty of this study is the inclusion of cellular heterogeneity and fibroblasts inside the sinoatrial node to investigate how they tune the robustness of stimulus formation and conduction under different conditions (baseline, ionic current blocks, autonomic modulation, and external high-frequency pacing).
Results: The simulations show that both heterogeneity and fibroblasts significantly increase the safety factor for conduction by more than 10% in almost all the conditions tested and shorten the sinus node recovery time after overdrive suppression by up to 60%. In the human model, especially under challenging conditions, the fibroblasts help the heterogeneous myocytes to synchronise their rate (e.g. a -82% change in the cycle-length standard deviation, σCL, under 25 nM acetylcholine administration) and to capture the atrium (with a 25% L-type calcium current block). However, the anatomical and gap-junctional coupling aspects remain the most important model parameters that allow effective atrial excitation.
Conclusion: Despite the limitations of the proposed model, this work suggests a quantitative explanation for the astonishing overall heterogeneity shown by the sinoatrial node.
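As background to the tissue-level simulations, propagation in 2D atrial models of this kind is commonly described by a monodomain reaction-diffusion equation with fibroblast coupling of the following standard form (given here for orientation only; the abstract does not spell out the authors' exact equations):

    \frac{\partial V_m}{\partial t} = \nabla\cdot\bigl(\mathbf{D}\,\nabla V_m\bigr)
        - \frac{I_{\mathrm{ion}}(V_m,\mathbf{y}) + n_f\,G_{\mathrm{gap}}\,(V_m - V_f)}{C_m}

Cellular heterogeneity enters through cell-type-specific ionic currents I_ion, the diffusion tensor D encodes gap-junctional coupling, and each myocyte may be coupled to n_f fibroblasts at potential V_f through a conductance G_gap.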

6.
J Integr Bioinform; 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39092509

ABSTRACT

This paper provides an overview of the development and operation of the Leonhard Med Trusted Research Environment (TRE) at ETH Zurich. Leonhard Med gives scientific researchers the ability to securely work on sensitive research data. We give an overview of the user perspective, the legal framework for processing sensitive data, design history, current status, and operations. Leonhard Med is an efficient, highly secure Trusted Research Environment for data processing, hosted at ETH Zurich and operated by the Scientific IT Services (SIS) of ETH. It provides a full stack of security controls that allow researchers to store, access, manage, and process sensitive data according to Swiss legislation and ETH Zurich Data Protection policies. In addition, Leonhard Med fulfills the BioMedIT Information Security Policies and is compatible with international data protection laws, and can therefore be utilized within the scope of national and international collaborative research projects. Initially designed as a "bare-metal" High-Performance Computing (HPC) platform to achieve maximum performance, Leonhard Med was later re-designed as a virtualized, private cloud platform to offer more flexibility to its customers. Sensitive data can be analyzed in secure, segregated spaces called tenants. Technical and Organizational Measures (TOMs) are in place to assure the confidentiality, integrity, and availability of sensitive data. At the same time, Leonhard Med ensures broad access to cutting-edge research software, especially for the analysis of human omics data and other personalized health applications.

7.
Sci Rep; 14(1): 18384, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39117762

ABSTRACT

The fundamental question of how forces are generated in a motile cell, a lamellipodium, and a comet tail is the subject of this note. It is now well established that cellular motility results from the polymerization of actin, the most abundant protein in eukaryotic cells, into an interconnected set of filaments. We portray this process in a continuum mechanics framework, claiming that polymerization promotes a mechanical swelling in a narrow zone around the nucleation loci, which ultimately results in cellular or bacterial motility. To this end, a new paradigm in continuum multi-physics has been designed, starting from the well-known Larché-Cahn theory of chemo-transport-mechanics. In this note, we set up the theory of network growth and compare the outcomes of numerical simulations with experimental evidence.
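The abstract does not give the constitutive equations; as a simplified, small-strain illustration of how polymerisation-driven swelling can be coupled to mechanics (the paper itself works in a richer finite-strain, multi-physics setting), the local monomer/network concentration c can enter as an isotropic eigenstrain:

    \boldsymbol{\varepsilon} = \boldsymbol{\varepsilon}^{e} + \boldsymbol{\varepsilon}^{s},
    \qquad
    \boldsymbol{\varepsilon}^{s} = \beta\,(c - c_0)\,\mathbf{I},
    \qquad
    \boldsymbol{\sigma} = \mathbb{C} : \boldsymbol{\varepsilon}^{e}

Here β is a swelling coefficient and c0 a reference concentration: where polymerization raises c in the narrow zone around the nucleation loci, the swelling strain grows and the resulting stress pushes the network forward, which is the force-generation mechanism the note quantifies.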


Subject(s)
Actins, Cell Movement, Actins/metabolism, Biological Models, Actin Cytoskeleton/metabolism, Pseudopodia/metabolism, Pseudopodia/physiology, Biomechanical Phenomena, Polymerization
8.
Open Res Eur; 4: 165, 2024.
Article in English | MEDLINE | ID: mdl-39210980

ABSTRACT

Ab initio electronic structure applications are among the most widely used in High-Performance Computing (HPC), and the eigenvalue problem is often their main computational bottleneck. This article presents our initial efforts in porting these codes to a RISC-V prototype platform leveraging a wide Vector Processing Unit (VPU). Our test software is based on a mini-app extracted from the ELPA eigensolver library. We tested the user-space Vehave emulation tool and a RISC-V vector architecture implemented on an FPGA. Metrics were extracted from both systems for different vectorisation strategies, ranging from the simplest and most portable (autovectorisation, assisted by fusing loops in the code) to the most complex (intrinsics). We observed a progressive reduction in the number of vector instructions, executed instructions, and computing cycles with these methodologies, which will lead to a substantial speed-up in the calculations. The obtained outcomes are crucial for advancing the porting of computational materials and molecular science codes to (post-)exascale architectures using RISC-V-based technologies fully developed within the EU. Our evaluation also provides valuable feedback for hardware designers, engineers and compiler developers, making this use case pivotal for co-design efforts.

9.
J Comput Chem; 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39215569

ABSTRACT

We present ichor, an open-source Python library that simplifies data management in computational chemistry and streamlines machine learning force field development. Ichor implements many easily extensible file management tools, in addition to a lazy file reading system, allowing efficient management of hundreds of thousands of computational chemistry files. Data from calculations can be readily stored into databases for easy sharing and post-processing. Raw data can be directly processed by ichor to create machine learning-ready datasets. In addition to powerful data-related capabilities, ichor provides interfaces to popular workload management software employed by High Performance Computing clusters, making for effortless submission of thousands of separate calculations with only a single line of Python code. Furthermore, a simple-to-use command line interface has been implemented through a series of menu systems to further increase accessibility and efficiency of common important ichor tasks. Finally, ichor implements general tools for visualization and analysis of datasets and tools for measuring machine-learning model quality both on test set data and in simulations. With the current functionalities, ichor can serve as an end-to-end data procurement, data management, and analysis solution for machine-learning force-field development.
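The abstract mentions single-line submission of thousands of calculations through HPC workload managers; ichor's actual interface is not reproduced here, so the following is only a generic sketch of the underlying pattern — a programmatically generated SLURM array job — with the run_one.sh worker script and paths as hypothetical placeholders.

    import subprocess
    from pathlib import Path

    def submit_array(n_tasks: int, workdir: str = "calcs") -> str:
        """Write a SLURM array script covering n_tasks independent calculations and submit it."""
        script = Path(workdir, "submit_array.sh")
        script.parent.mkdir(parents=True, exist_ok=True)
        script.write_text(
            "#!/bin/bash\n"
            f"#SBATCH --array=0-{n_tasks - 1}\n"
            "#SBATCH --time=01:00:00\n"
            # run_one.sh is a hypothetical per-task worker (e.g. one quantum-chemistry input file).
            "bash run_one.sh \"$SLURM_ARRAY_TASK_ID\"\n"
        )
        out = subprocess.run(["sbatch", str(script)], capture_output=True, text=True, check=True)
        return out.stdout.strip()   # e.g. "Submitted batch job 123456"

    # submit_array(10000)  # one array job fans out into thousands of scheduler tasks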

10.
Arch Public Health; 82(Suppl 1): 142, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39198864

ABSTRACT

Artificial Intelligence (AI) is already a reality in health systems, bringing benefits to patients, healthcare providers, and other stakeholders in health care. To further leverage AI in health, Belgium is advised to make policy-level decisions about how to fund, design and undertake actions focussing on data access and inclusion, IT infrastructure, legal and ethical frameworks, and public and professional trust, in addition to education and interpretation. EU initiatives, such as the European Health Data Space (EHDS), the Genomics Data Infrastructure (GDI) and the EU Cancer Imaging Infrastructure (EUCAIM), are building EU data infrastructures. To build on these positive developments, Belgium should continue to invest in and support existing European data infrastructures. At the national level, a clear vision and strategy need to be developed, and infrastructures need to be harmonized at the European level.

11.
J Cheminform; 16(1): 86, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075588

ABSTRACT

Every year, more than 19 million cancer cases are diagnosed, and this number continues to increase annually. Since standard treatment options have varying success rates for different types of cancer, understanding the biology of an individual's tumour becomes crucial, especially for cases that are difficult to treat. Personalised high-throughput profiling, using next-generation sequencing, allows for a comprehensive examination of biopsy specimens. Furthermore, the widespread use of this technology has generated a wealth of information on cancer-specific gene alterations. However, there exists a significant gap between identified alterations and their proven impact on protein function. Here, we present a bioinformatics pipeline that enables fast analysis of a missense mutation's effect on stability and function in known oncogenic proteins. This pipeline is coupled with a predictor that summarises the outputs of the different tools used throughout the pipeline, providing a single probability score and achieving a balanced accuracy above 86%. The pipeline incorporates a virtual screening method to suggest potential FDA/EMA-approved drugs to be considered for treatment. We showcase three case studies to demonstrate the timely utility of this pipeline. To facilitate access and analysis of cancer-related mutations, we have packaged the pipeline as a web server, which is freely available at https://loschmidt.chemi.muni.cz/predictonco/.
Scientific contribution: This work presents a novel bioinformatics pipeline that integrates multiple computational tools to predict the effects of missense mutations on proteins of oncological interest. The pipeline uniquely combines fast protein modelling, stability prediction, and evolutionary analysis with virtual drug screening, while offering actionable insights for precision oncology. This comprehensive approach surpasses existing tools by automating the interpretation of mutations and suggesting potential treatments, thereby striving to bridge the gap between sequencing data and clinical application.
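As an illustration of the meta-predictor idea described above — a single probability score summarising several tool outputs, evaluated by balanced accuracy — the sketch below uses a plain logistic regression on made-up features; the published predictor, its feature set, and its training data are not reproduced here.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import balanced_accuracy_score

    # Hypothetical per-mutation features: stability change, conservation score, pocket distance.
    X_train = np.array([[-1.2, 0.9, 4.1], [0.3, 0.1, 12.0], [-2.5, 0.8, 3.3], [0.1, 0.2, 9.7]])
    y_train = np.array([1, 0, 1, 0])            # 1 = functionally deleterious / oncogenic effect

    meta = LogisticRegression().fit(X_train, y_train)

    X_test = np.array([[-1.8, 0.7, 5.0], [0.2, 0.3, 11.2]])
    y_test = np.array([1, 0])
    proba = meta.predict_proba(X_test)[:, 1]     # one probability per mutation
    print(proba, balanced_accuracy_score(y_test, meta.predict(X_test)))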

12.
Open Res Eur; 4: 35, 2024.
Article in English | MEDLINE | ID: mdl-38974408

ABSTRACT

This article introduces a suite of mini-applications (mini-apps) designed to optimise computational kernels in ab initio electronic structure codes. The suite is developed from flagship applications participating in the NOMAD Center of Excellence, such as the ELPA eigensolver library and the GW implementations of the exciting, Abinit, and FHI-aims codes. The mini-apps were identified by targeting functions that significantly contribute to the total execution time in the parent applications. This strategic selection allows for concentrated optimisation efforts. The suite is designed for easy deployment on various High-Performance Computing (HPC) systems, supported by an integrated CMake build system for straightforward compilation and execution. The aim is to harness the capabilities of emerging (post)exascale systems, which necessitate concurrent hardware and software development - a concept known as co-design. The mini-app suite serves as a tool for profiling and benchmarking, providing insights that can guide both software optimisation and hardware design. Ultimately, these developments will enable more accurate and efficient simulations of novel materials, leveraging the full potential of exascale computing in material science research.

13.
Sci Rep; 14(1): 16574, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39020056

ABSTRACT

The irregular distribution of non-zero elements in large-scale sparse matrices leads to low data-access efficiency on the unique architecture of the Sunway many-core processor, which makes the efficient implementation of sparse matrix-vector multiplication (SpMV) on the SW26010P many-core processor challenging. To address this problem, a study of SpMV optimization strategies is carried out for the SW26010P many-core processor. Firstly, we design a memorized data storage transformation strategy to convert matrices from the CSR storage format into BCSR (Block Compressed Sparse Row) storage. Secondly, a dynamic task scheduling method is introduced into the algorithm to achieve load balance among the slave cores. Thirdly, the LDM usage is refined and redesigned, and the slave-core dual-cache strategy is optimized to further improve performance. Finally, we selected a large number of representative sparse matrices from the Matrix Market for testing. The results show that the scheme clearly speeds up the processing of sparse matrices of various sizes, with a master-slave speedup ratio of up to 38 times. The optimization methods used in this paper have implications for other complex applications on the SW26010P many-core processor.
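The CSR-to-BCSR storage transformation at the heart of the first optimization can be illustrated with SciPy on a commodity machine (the 4x4 block size is an arbitrary placeholder; on the SW26010P it would be tuned to the slave-core LDM):

    import numpy as np
    from scipy import sparse

    # Random sparse test matrix in CSR format, standing in for a Matrix Market input.
    A_csr = sparse.random(4096, 4096, density=1e-3, format="csr", random_state=0)

    # Block Compressed Sparse Row: dense r-by-c blocks replace scattered non-zeros,
    # giving the contiguous accesses that block-oriented SpMV kernels rely on.
    A_bcsr = A_csr.tobsr(blocksize=(4, 4))

    x = np.random.rand(4096)
    y = A_bcsr @ x      # SpMV; on the Sunway this is the hand-optimised kernel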

14.
Acta Crystallogr D Struct Biol; 80(Pt 6): 439-450, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38832828

ABSTRACT

The expansive scientific software ecosystem, characterized by millions of titles across various platforms and formats, poses significant challenges in maintaining reproducibility and provenance in scientific research. The diversity of independently developed applications, evolving versions and heterogeneous components highlights the need for rigorous methodologies to navigate these complexities. In response to these challenges, the SBGrid team builds, installs and configures over 530 specialized software applications for use in the on-premises and cloud-based computing environments of SBGrid Consortium members. To address the intricacies of supporting this diverse application collection, the team has developed the Capsule Software Execution Environment, generally referred to as Capsules. Capsules rely on a collection of programmatically generated bash scripts that work together to isolate the runtime environment of one application from all other applications, thereby providing a transparent cross-platform solution without requiring specialized tools or elevated account privileges for researchers. Capsules facilitate modular, secure software distribution while maintaining a centralized, conflict-free environment. The SBGrid platform, which combines Capsules with the SBGrid collection of structural biology applications, aligns with FAIR goals by enhancing the findability, accessibility, interoperability and reusability of scientific software, ensuring seamless functionality across diverse computing environments. Its adaptability enables application beyond structural biology into other scientific fields.


Subject(s)
Software, Computational Biology/methods
15.
Philos Trans A Math Phys Eng Sci; 382(2275): 20230305, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-38910407

ABSTRACT

Physical mechanisms that contribute to the generation of fracture waves in condensed media under intensive dynamic impacts have not been fully studied. One of the hypotheses is that this process is associated with the blocky structure of a material. As the loading wave passes, the compliant interlayers between blocks are fractured, releasing the energy of self-balanced initial stresses in the blocks, which supports the motion of the fracture wave. We propose a new efficient numerical method for the analysis of the wave nature of the propagation of a system of cracks in thin interlayers of a blocky medium with complex rheological properties. The method is based on a variational formulation of the constitutive relations for the deformation of elastic-plastic materials, as well as the conditions for contact interaction of blocks through interlayers. We have developed a parallel computational algorithm that implements this method for supercomputers with cluster architecture. The results of the numerical simulation of the fracture wave propagation in tempered glass under the action of distributed pulse disturbances are presented. This article is part of the theme issue 'Non-smooth variational problems with applications in mechanics'.

16.
F1000Res; 13: 203, 2024.
Article in English | MEDLINE | ID: mdl-38868668

ABSTRACT

Converged computing is an emerging area of computing that brings together the best of both worlds for the high performance computing (HPC) and cloud-native communities. The economic influence of cloud computing and the need for workflow portability, flexibility, and manageability are driving this emergence. Navigating the uncharted territory and building an effective space for both HPC and cloud require collaborative technological development and research. In this work, we focus on developing components for the converged workload manager, the central component of batch workflows running in any environment. From the cloud, we base our work on Kubernetes, the de facto standard batch workload orchestrator. From HPC, the orchestrator counterpart is Flux Framework, a fully hierarchical resource manager and graph-based scheduler with a modular architecture that supports sophisticated scheduling and job management. Bringing these managers together consists of implementing Flux inside Kubernetes, enabling hierarchical resource management and scheduling that scales without burdening the Kubernetes scheduler. This paper introduces the Flux Operator, an on-demand HPC workload manager deployed in Kubernetes. Our work describes design decisions, mapping components between environments, and experimental features. We perform experiments that compare application performance when deployed by the Flux Operator and the MPI Operator and present the results. Finally, we review remaining challenges and describe our vision of the future for improved technological innovation and collaboration through converged computing.


Subject(s)
Cloud Computing, Workload, Workflow
17.
Sci Rep; 14(1): 14579, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38918413

ABSTRACT

Understanding the genetic basis of complex diseases is one of the most important challenges in current precision medicine. To this end, Genome-Wide Association Studies aim to correlate Single Nucleotide Polymorphisms (SNPs) to the presence or absence of certain traits. However, these studies do not consider interactions between several SNPs, known as epistasis, which explain most genetic diseases. Analyzing SNP combinations to detect epistasis is a major computational task, due to the enormous search space. A possible solution is to employ deep learning strategies for genomic prediction, but the lack of explainability derived from the black-box nature of neural networks is a challenge yet to be addressed. Herein, a novel, flexible, portable, and scalable framework for network interpretation based on transformers is proposed to tackle any-order epistasis. The results on various epistasis scenarios show that the proposed framework outperforms state-of-the-art methods for explainability, while being scalable to large datasets and portable to various deep learning accelerators. The proposed framework is validated on three WTCCC datasets, identifying SNPs related to genes known in the literature that have direct relationships with the studied diseases.
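The abstract does not disclose the architecture in detail; purely as a sketch of how transformer attention weights can be read out to rank candidate SNP-SNP (epistatic) interactions, one might do the following in PyTorch, with the embedding scheme and dimensions chosen arbitrarily.

    import torch
    import torch.nn as nn

    n_snps, d_model = 100, 32
    attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

    # One individual's genotypes (0/1/2 minor-allele counts), embedded per SNP.
    genotypes = torch.randint(0, 3, (1, n_snps))
    embed = nn.Embedding(3, d_model)
    x = embed(genotypes)                        # shape: (1, n_snps, d_model)

    # Attention weights form an n_snps x n_snps interaction map; in a trained model,
    # the head-averaged weights can be used to rank candidate epistatic SNP pairs.
    _, weights = attn(x, x, x, need_weights=True, average_attn_weights=True)
    pair_scores = weights[0]                    # shape: (n_snps, n_snps)
    top = torch.topk(pair_scores.flatten(), k=5)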


Subject(s)
Genetic Epistasis, Genome-Wide Association Study, Single Nucleotide Polymorphism, Humans, Genome-Wide Association Study/methods, Deep Learning, Neural Networks (Computer), Computational Biology/methods, Algorithms
18.
BMC Bioinformatics; 25(1): 199, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38789933

ABSTRACT

BACKGROUND: Computational models in systems biology are becoming more important with the advancement of experimental techniques for querying the mechanistic details responsible for phenotypes of interest. In particular, Boolean models are well suited to describing the complexity of signaling networks while being simple enough to scale to a very large number of components. With the advance of Boolean model inference techniques, the field is moving from an artisanal way of building models of moderate size to a more automated one, leading to very large models. In this context, adapting the simulation software to such increases in complexity is crucial.
RESULTS: We present two new developments in the continuous-time Boolean simulators: MaBoSS.MPI, a parallel implementation of MaBoSS that can exploit the computational power of very large CPU clusters, and MaBoSS.GPU, which can use GPU accelerators to perform these simulations.
CONCLUSION: These implementations enable the simulation and exploration of the behavior of very large models, thus becoming a valuable analysis tool for the systems biology community.
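MaBoSS-style simulators advance Boolean networks in continuous time by drawing stochastic waiting times for node flips; the toy, single-trajectory sketch below illustrates that simulation style on a made-up two-node network and is not the MaBoSS engine itself.

    import random

    # Toy network: each node has a flip rate that depends on the current Boolean state.
    def rates(state):
        a, b = state["A"], state["B"]
        return {
            "A": 1.0 if (not a and b) else (0.5 if (a and b) else 0.1),
            "B": 2.0 if not b else 0.3,
        }

    def simulate(state, t_max=10.0):
        """Gillespie-style continuous-time trajectory over Boolean states."""
        t, trajectory = 0.0, [(0.0, dict(state))]
        while t < t_max:
            r = rates(state)
            t += random.expovariate(sum(r.values()))                  # waiting time to next flip
            node = random.choices(list(r), weights=list(r.values()))[0]
            state[node] = not state[node]                             # flip the chosen node
            trajectory.append((t, dict(state)))
        return trajectory

    print(simulate({"A": False, "B": True})[:5])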


Subject(s)
Computer Simulation, Software, Systems Biology/methods, Computational Biology/methods, Algorithms, Computer Graphics
19.
PNAS Nexus; 3(5): pgae160, 2024 May.
Article in English | MEDLINE | ID: mdl-38711809

ABSTRACT

Ultracold atoms provide a platform for analog quantum computation capable of simulating the quantum turbulence that underlies puzzling phenomena like pulsar glitches in rapidly spinning neutron stars. Unlike other platforms such as liquid helium, ultracold atoms have a viable theoretical framework for dynamics, but simulations push the edge of current classical computers. We present the largest simulations of fermionic quantum turbulence to date and explain the computing technology needed, especially improvements in the Eigenvalue soLvers for Petaflop Applications (ELPA) library that enable us to diagonalize matrices of record size (millions by millions). We quantify how dissipation and thermalization proceed in fermionic quantum turbulence by using the internal structure of vortices as a new probe of the local effective temperature. All simulation data and source codes are made available to facilitate rapid scientific progress in the field of ultracold Fermi gases.

20.
Curr Opin Struct Biol; 87: 102817, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38795562

ABSTRACT

New high-performance computing architectures are becoming operative, in addition to exascale computers. Quantum computers (QC) solve optimization problems with unprecedented efficiency and speed, while neuromorphic hardware (NMH) simulates neural network dynamics. Although, at the moment, neither finds practical use in all-atom biomolecular simulations, QC might be exploited in the not-too-far future to simulate systems for which electronic degrees of freedom play a key and intricate role in biological function, whereas NMH might accelerate molecular dynamics simulations with low energy consumption. Machine learning and artificial intelligence algorithms running on NMH and QC could assist in the analysis of data and speed up research. If these implementations are successful, modular supercomputing could further dramatically enhance overall computing capacity by combining highly optimized software tools into workflows, linking these architectures to exascale computers.


Subject(s)
Neural Networks (Computer), Quantum Theory, Molecular Dynamics Simulation, Machine Learning, Software, Algorithms