Results 1 - 20 of 32
1.
bioRxiv ; 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38659836

ABSTRACT

Motivation: The Genomic Data Commons (GDC) is a powerful resource that facilitates the exploration of molecular alterations across various diseases. However, using this resource for meta-analysis requires many different tools to query, download, organize, and analyze the data. To enable faster, simpler analysis of DNA methylation and RNA sequencing datasets from the GDC, we developed autogdc, a Python package that integrates data curation and preprocessing with meta-analysis functionality into one simplified bioinformatic pipeline. Availability and Implementation: The autogdc Python package is available under the GPLv3 license, along with several examples of typical use-case scenarios in the form of Jupyter notebooks. The data is all originally provided by the GDC, and is therefore available under the NIH Genomic Data Sharing (GDS) and NCI GDS policies.
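The abstract does not show autogdc's own API, but the kind of query such a pipeline issues against the GDC can be sketched. The snippet below only constructs a GDC-style filter payload (no network call); the specific field names and project ID are illustrative assumptions, not taken from the paper.

```python
import json

def gdc_filter(field, value):
    # One GDC-style equality clause: {"op": "in", "content": {...}}
    return {"op": "in", "content": {"field": field, "value": [value]}}

def build_query(filters, fields, size=10):
    # Combine clauses with "and" and assemble request parameters for a
    # GDC-style file-search endpoint (filters are sent as a JSON string).
    combined = {"op": "and", "content": filters}
    return {
        "filters": json.dumps(combined),
        "fields": ",".join(fields),
        "format": "JSON",
        "size": str(size),
    }

params = build_query(
    filters=[
        gdc_filter("cases.project.project_id", "TCGA-LUAD"),  # hypothetical project
        gdc_filter("data_type", "Methylation Beta Value"),
    ],
    fields=["file_id", "file_name", "cases.case_id"],
)
print(params["fields"])  # file_id,file_name,cases.case_id
```

A package like autogdc would wrap query construction, download, and preprocessing of the matching files behind one call.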

2.
JAMIA Open ; 7(2): ooae025, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38617994

ABSTRACT

Objectives: A data commons is a software platform for managing, curating, analyzing, and sharing data with a community. The Pandemic Response Commons (PRC) is a data commons designed to provide a data platform for researchers studying an epidemic or pandemic. Methods: The PRC was developed using the open source Gen3 data platform and is based upon consortium, data, and platform agreements developed by the not-for-profit Open Commons Consortium. A formal consortium of Chicagoland area organizations was formed to develop and operate the PRC. Results: The consortium developed a general PRC and an instance of it for the Chicagoland region called the Chicagoland COVID-19 Commons. A Gen3 data platform was set up and operated with policies, procedures, and controls for a NIST SP 800-53 revision 4 Moderate system. A consensus data model for the commons was developed, and a variety of datasets were curated, harmonized and ingested, including statistical summary data about COVID cases, patient level clinical data, and SARS-CoV-2 viral variant data. Discussion and conclusions: Given the various legal and data agreements required to operate a data commons, a PRC is designed to be in place and operating at a low level prior to the occurrence of an epidemic, with the activities increasing as required during an epidemic. A regional instance of a PRC can also be part of a broader data ecosystem or data mesh consisting of multiple regional commons supporting pandemic response through sharing regional data.

3.
JAMIA Open ; 7(1): ooae004, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38304249

ABSTRACT

Objective: The Pediatric Cancer Data Commons (PCDC)-a project of Data for the Common Good-houses clinical pediatric oncology data and utilizes the open-source Gen3 platform. To meet the needs of end users, the PCDC development team expanded the out-of-the-box functionality and developed additional custom features that should be useful to any group developing similar data commons. Materials and Methods: Modifications of the PCDC data portal software were implemented to facilitate desired functionality. Results: Newly developed functionality includes updates to authorization methods, expansion of filtering capabilities, and addition of data analysis functions. Discussion: We describe the process by which custom functionalities were developed. Features are open source and available to be implemented and adapted to suit the needs of data portals that utilize the Gen3 platform. Conclusion: Data portals are indispensable tools for facilitating data sharing. Open-source infrastructure facilitates a modular and collaborative approach for meeting the needs of end users and stakeholders.

4.
Stud Health Technol Inform ; 310: 3-7, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269754

ABSTRACT

Modern clinical studies collect longitudinal and multimodal data about participants, treatments and responses, biospecimens, and molecular and multiomics data. Such rich and complex data require new common data models (CDMs) to support data dissemination and research collaboration. We have developed the ARDaC CDM for the Alcoholic Hepatitis Network (AlcHepNet) Research Data Commons (ARDaC) to support clinical studies and translational research in the national AlcHepNet consortium. The ARDaC CDM bridges the gap between the data models used by the AlcHepNet electronic data capture platform (REDCap) and the Genomic Data Commons (GDC) data model used by the Gen3 data commons framework. It extends the GDC data model for clinical studies; facilitates the harmonization of research data across consortia and programs; and supports the development of the ARDaC. The ARDaC CDM is designed as a general and extensible CDM that addresses the needs of modern clinical studies. The ARDaC CDM is available at https://dev.ardac.org/DD.


Subject(s)
Common Data Elements, Translational Biomedical Research, Humans, Information Dissemination
5.
JAMIA Open ; 6(4): ooad092, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37942470

ABSTRACT

Objectives: Substance misuse is a complex and heterogeneous set of conditions associated with high mortality and regional/demographic variations. Existing data systems are siloed and have been ineffective in curtailing the substance misuse epidemic. Therefore, we aimed to build a novel informatics platform, the Substance Misuse Data Commons (SMDC), by integrating multiple data modalities to provide a unified record of information crucial to improving outcomes in substance misuse patients. Materials and Methods: The SMDC was created by linking electronic health record (EHR) data from adult cases of substance (alcohol, opioid, nonopioid drug) misuse at the University of Wisconsin hospitals to socioeconomic and state agency data. To ensure private and secure data exchange, Privacy-Preserving Record Linkage (PPRL) and Honest Broker services were utilized. The overlap in mortality reporting among the EHR, state Vital Statistics, and a commercial national data source was assessed. Results: The SMDC included data from 36 522 patients experiencing 62 594 healthcare encounters. Over half of patients were linked to the statewide ambulance database and prescription drug monitoring program. Chronic diseases accounted for most underlying causes of death, while drug-related overdoses constituted 8%. Our analysis of mortality revealed a 49.1% overlap across the 3 data sources. Nonoverlapping deaths were associated with poor socioeconomic indicators. Discussion: Through PPRL, the SMDC enabled the longitudinal integration of multimodal data. Combining death data from local, state, and national sources enhanced mortality tracking and exposed disparities. Conclusion: The SMDC provides a comprehensive resource for clinical providers and policymakers to inform interventions targeting substance misuse-related hospitalizations, overdoses, and death.
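The SMDC abstract relies on Privacy-Preserving Record Linkage (PPRL): sites exchange irreversible tokens derived from identifiers rather than the identifiers themselves. A minimal sketch of the idea, using a keyed hash over normalized fields (the secret, field choices, and normalization rules here are illustrative assumptions; production PPRL systems are considerably more sophisticated, e.g. using Bloom-filter encodings for fuzzy matching):

```python
import hashlib
import hmac

# Hypothetical shared secret, distributed out of band by an honest broker
# so that no single site can reverse or replay tokens on its own.
SITE_SECRET = b"shared-linkage-secret"

def normalize(name, dob):
    # Normalize identifiers so trivial formatting differences do not break linkage.
    return f"{name.strip().lower()}|{dob.strip()}"

def linkage_token(name, dob, secret=SITE_SECRET):
    # Keyed hash (HMAC-SHA256) of normalized identifiers; sites exchange
    # these tokens, never raw PII.
    msg = normalize(name, dob).encode("utf-8")
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

# Two data sources holding the same patient derive the same token
# despite formatting differences in the source records.
ehr_token = linkage_token("Jane Doe ", "1980-01-01")
ems_token = linkage_token("jane doe", "1980-01-01")
print(ehr_token == ems_token)  # True
```

Matching tokens across the EHR, ambulance database, and vital statistics then yields the linked longitudinal record without any site disclosing identities.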

6.
BMC Med Inform Decis Mak ; 23(1): 238, 2023 10 25.
Article in English | MEDLINE | ID: mdl-37880712

ABSTRACT

BACKGROUND: Online questionnaires are commonly used to collect information from participants in epidemiological studies. This requires building questionnaires in machine-readable formats that can be delivered to study participants using web-based technologies such as progressive web applications. However, the paucity of open-source markup standards with support for complex logic makes collaborative development of web-based questionnaire modules difficult. This often prevents interoperability and reusability of questionnaire modules across epidemiological studies. RESULTS: We developed Quest, an open-source markup language for presenting questionnaire content and logic, together with a real-time renderer that enables the user to test logic (e.g., skip patterns) and view the structure of data collection. We provide the Quest markup language, an in-browser markup rendering tool, a questionnaire development tool, and an example web application that embeds the renderer, developed for The Connect for Cancer Prevention Study. CONCLUSION: A markup language can specify both the content and logic of a questionnaire as plain text. Questionnaire markup such as Quest can become a standard format for storing questionnaires or sharing them across the web. Quest is a step toward generating FAIR data in epidemiological studies by facilitating questionnaire reusability and data interoperability using open-source tools.
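The core idea of a renderer that "tests logic (e.g., skip patterns)" can be sketched in a few lines: questions carry a show-if condition, and the renderer filters on the answers collected so far. This hand-rolled structure is purely illustrative and is not actual Quest syntax.

```python
# Minimal questionnaire with skip logic (illustrative format, not Quest markup):
# a question is shown either unconditionally or only when a prior answer matches.
QUESTIONNAIRE = [
    {"id": "smoker", "text": "Do you smoke?", "show_if": None},
    {"id": "packs", "text": "Packs per day?", "show_if": ("smoker", "yes")},
    {"id": "age", "text": "Your age?", "show_if": None},
]

def visible_questions(answers):
    # Return the question ids a renderer should display, given answers so far.
    shown = []
    for q in QUESTIONNAIRE:
        cond = q["show_if"]
        if cond is None or answers.get(cond[0]) == cond[1]:
            shown.append(q["id"])
    return shown

print(visible_questions({"smoker": "no"}))   # ['smoker', 'age']
print(visible_questions({"smoker": "yes"}))  # ['smoker', 'packs', 'age']
```

Expressing the same conditions declaratively in plain-text markup is what makes questionnaire modules diffable, shareable, and reusable across studies.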


Subject(s)
Software, Humans, Surveys and Questionnaires, Epidemiological Studies
7.
J Allergy Clin Immunol Pract ; 11(4): 1063-1067, 2023 04.
Article in English | MEDLINE | ID: mdl-36796512

ABSTRACT

Food allergy is a significant health problem affecting approximately 8% of children and 11% of adults in the United States. It exhibits all the characteristics of a "complex" genetic trait; therefore, it is necessary to look at very large numbers of patients, far more than exist at any single organization, to eliminate gaps in the current understanding of this complex chronic disorder. Advances may be achieved by bringing together food allergy data from large numbers of patients into a Data Commons, a secure and efficient platform for researchers, comprising standardized data, available in a common interface for download and/or analysis, in accordance with the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. Prior data commons initiatives indicate that research community consensus and support, formal food allergy ontology, data standards, an accepted platform and data management tools, an agreed upon infrastructure, and trusted governance are the foundation of any successful data commons. In this article, we will present the justification for the creation of a food allergy data commons and describe the core principles that can make it successful and sustainable.


Subject(s)
Data Collection, Food Hypersensitivity, Humans, Food Hypersensitivity/epidemiology, United States/epidemiology, Information Dissemination, Databases as Topic, Data Collection/standards
8.
J Clin Transl Sci ; 7(1): e255, 2023.
Article in English | MEDLINE | ID: mdl-38229897

ABSTRACT

Background/Objective: Non-clinical aspects of life, such as social, environmental, behavioral, psychological, and economic factors, what we call the sociome, play significant roles in shaping patient health and health outcomes. This paper introduces the Sociome Data Commons (SDC), a new research platform that enables large-scale data analysis for investigating such factors. Methods: This platform focuses on "hyper-local" data, i.e., at the neighborhood or point level, a geospatial scale of data not adequately considered in existing tools and projects. We enumerate key insights gained regarding data quality standards, data governance, and organizational structure for long-term project sustainability. A pilot use case investigating sociome factors associated with asthma exacerbations in children residing on the South Side of Chicago used machine learning and six SDC datasets. Results: The pilot use case reveals one dominant spatial cluster for asthma exacerbations and important roles of housing conditions and cost, proximity to Superfund pollution sites, urban flooding, violent crime, lack of insurance, and a poverty index. Conclusion: The SDC has been purposefully designed to support and encourage extension of the platform into new data sets as well as the continued development, refinement, and adoption of standards for dataset quality, dataset inclusion, metadata annotation, and data access/governance. The asthma pilot has served as the first driver use case and demonstrates promise for future investigation into the sociome and clinical outcomes. Additional projects will be selected, in part for their ability to exercise and grow the capacity of the SDC to meet its ambitious goals.

9.
G3 (Bethesda) ; 12(12)2022 12 01.
Article in English | MEDLINE | ID: mdl-36214621

ABSTRACT

The functionally diverse members of the human Transforming Growth Factor-β (TGF-β) family are tightly regulated. TGF-β regulation includes 2 disulfide-dependent mechanisms-dimerization and partner protein binding. The specific cysteines participating in these regulatory mechanisms are known in just 3 of the 33 human TGF-β proteins. Human prodomain alignments revealed that 24 TGF-β prodomains contain conserved cysteines in 2 highly exposed locations. There are 3 in the region of the β8 helix that mediates dimerization near the prodomain carboxy terminus. There are 2 in the Association region that mediates partner protein binding near the prodomain amino terminus. The alignments predict the specific cysteines contributing to disulfide-dependent regulation of 72% of human TGF-β proteins. Database mining then identified 9 conserved prodomain cysteine mutations and their disease phenotypes in 7 TGF-β proteins. Three common adenoma phenotypes for prodomain cysteine mutations suggested 7 new regulatory heterodimer pairs. Two common adenoma phenotypes for prodomain and binding partner cysteine mutations revealed 17 new regulatory interactions. Overall, the analysis of human TGF-β prodomains suggests a significantly expanded scope of disulfide-dependent regulation by heterodimerization and partner protein binding; regulation that is often lost in tumors.
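The alignment-based step of this analysis, finding columns where a cysteine is conserved across prodomain sequences, is simple to sketch. The three-sequence toy alignment below is invented for illustration; the paper's analysis used full human TGF-β family prodomain alignments.

```python
# Toy prodomain alignment (one row per sequence, '-' marks a gap).
ALIGNMENT = [
    "MAC-TCLKQ",
    "MSC-TCIRQ",
    "MTCATCLKQ",
]

def conserved_cysteines(alignment, min_frac=1.0):
    # Return 0-based alignment columns where Cys ('C') appears in at least
    # min_frac of the non-gap residues in that column.
    columns = []
    for i in range(len(alignment[0])):
        residues = [seq[i] for seq in alignment if seq[i] != "-"]
        if residues and sum(r == "C" for r in residues) / len(residues) >= min_frac:
            columns.append(i)
    return columns

print(conserved_cysteines(ALIGNMENT))  # [2, 5]
```

Mapping such conserved columns back to residue numbers in each sequence is what identifies the candidate regulatory cysteines in individual family members.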


Subject(s)
Neoplasms, Transforming Growth Factor beta, Humans, Transforming Growth Factor beta/metabolism, Cysteine, Disulfides, Protein Binding, Neoplasms/genetics
10.
J Integr Bioinform ; 19(4)2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36065132

ABSTRACT

In recent years, progress in data collection in the life sciences has created increasing demand and opportunities for advanced bioinformatics. This includes data management as well as individual data analysis, and often covers the entire data life cycle. A variety of tools have been developed to store, share, or reuse the data produced in different domains such as genotyping. Imputation in particular, as a subfield of genotyping, requires good Research Data Management (RDM) strategies to enable use and re-use of genotypic data. Sustainable software requires tools, and surrounding ecosystems, that are reusable and maintainable. Reusability of streamlined tools can be achieved, for example, by standardizing the input and output of the different tools and adopting open and broadly used file formats. By using such established file formats, the tools can also be connected with others, improving the overall interoperability of the software. Finally, it is important to build strong communities that maintain the tools by developing and contributing new features and maintenance updates. In this article, concepts for this are presented for an imputation service.
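The "open and broadly used file formats" argument is concrete in genotyping: tools that read and write VCF can be chained freely. As a small illustration, the snippet below parses one VCF data line into per-sample genotype calls using only the standard library (the variant line and sample names are made up for the example).

```python
# One tab-separated VCF data line: CHROM POS ID REF ALT QUAL FILTER INFO FORMAT + samples.
VCF_LINE = "1\t10177\trs367896724\tA\tAC\t100\tPASS\t.\tGT\t0|1\t1|1\t0|0"
SAMPLES = ["S1", "S2", "S3"]  # would normally come from the #CHROM header line

def parse_genotypes(line, samples):
    fields = line.rstrip("\n").split("\t")
    chrom, pos, vid, ref, alt = fields[0], int(fields[1]), fields[2], fields[3], fields[4]
    fmt = fields[8].split(":")
    gt_index = fmt.index("GT")  # locate the genotype subfield within FORMAT
    calls = {}
    for sample, value in zip(samples, fields[9:]):
        calls[sample] = value.split(":")[gt_index]
    return {"chrom": chrom, "pos": pos, "id": vid, "ref": ref, "alt": alt, "gt": calls}

rec = parse_genotypes(VCF_LINE, SAMPLES)
print(rec["gt"]["S2"])  # 1|1
```

Because the format is standardized, the output of an imputation service written this way is immediately consumable by downstream association or quality-control tools.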


Subject(s)
Computational Biology, Ecosystem, Genotype, Software
11.
SSM Qual Res Health ; 2: 100158, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36092769

ABSTRACT

The sudden and dramatic advent of the COVID-19 pandemic led to urgent demands for timely, relevant, yet rigorous research. This paper discusses the origin, design, and execution of the SolPan research commons, a large-scale, international, comparative, qualitative research project that sought to respond to the need for knowledge among researchers and policymakers in times of crisis. The form of organization as a research commons is characterized by an underlying solidaristic attitude of its members and its intrinsic organizational features in which research data and knowledge in the study is shared and jointly owned. As such, the project is peer-governed, rooted in (idealist) social values of academia, and aims at providing tools and benefits for its members. In this paper, we discuss challenges and solutions for qualitative studies that seek to operate as research commons.

12.
BMC Bioinformatics ; 23(Suppl 12): 386, 2022 Sep 23.
Article in English | MEDLINE | ID: mdl-36151511

ABSTRACT

BACKGROUND: Public data commons (PDCs) have been highlighted in the scientific literature for their capacity to collect and harmonize big data. Local data commons (LDCs), located within an institution or organization, have by contrast been underrepresented in the scientific literature, even though they are a critical part of research infrastructure. Being closest to the sources of data, LDCs can collect and maintain the most up-to-date, high-quality data within an organization. As data providers, LDCs face many challenges in both collecting and standardizing data; moreover, as consumers of PDCs, they face problems of data harmonization stemming from the monolithic harmonization pipeline designs commonly adopted by many PDCs. Unfortunately, existing guidelines and resources for building and maintaining data commons focus exclusively on PDCs and provide very little information on LDCs. RESULTS: This article focuses on four important observations. First, there are three different types of LDC service models, defined based on their roles and requirements. These can be used as guidelines for building new LDCs or enhancing the services of existing ones. Second, the seven core services of LDCs are discussed, including cohort identification and facilitation of genomic sequencing, the management of molecular reports and associated infrastructure, quality control, data harmonization, data integration, data sharing, and data access control. Third, instead of the commonly developed monolithic systems, we propose a new data sharing method for data harmonization that combines both divide-and-conquer and bottom-up approaches. Finally, an end-to-end LDC implementation is introduced with real-world examples. CONCLUSIONS: Although LDCs are an optimal place to identify and address data quality issues, they have traditionally been relegated to the role of passive data providers for much larger PDCs. Indeed, many LDCs limit their functions to routine data storage and transmission tasks due to a lack of information on how to design, develop, and improve their services using limited resources. We hope that this work will be a first small step in raising awareness among LDCs of their expanded utility, and in publicizing to a wider audience the importance of LDCs.


Subject(s)
Big Data, Information Dissemination, Developing Countries, Humans
14.
Neurotrauma Rep ; 3(1): 139-157, 2022.
Article in English | MEDLINE | ID: mdl-35403104

ABSTRACT

Traumatic brain injury (TBI) is a major public health problem. Despite considerable research deciphering injury pathophysiology, precision therapies remain elusive. Here, we present large-scale data sharing and machine intelligence approaches to leverage TBI complexity. The Open Data Commons for TBI (ODC-TBI) is a community-centered repository emphasizing Findable, Accessible, Interoperable, and Reusable data sharing and publication with persistent identifiers. Importantly, the ODC-TBI implements data sharing of individual subject data, enabling pooling for high-sample-size, feature-rich data sets for machine learning analytics. We demonstrate pooled ODC-TBI data analyses, starting with descriptive analytics of subject-level data from 11 previously published articles (N = 1250 subjects) representing six distinct pre-clinical TBI models. Second, we perform unsupervised machine learning on multi-cohort data to identify persistent inflammatory patterns across different studies, improving experimental sensitivity for pro- versus anti-inflammation effects. As funders and journals increasingly mandate open data practices, ODC-TBI will create new scientific opportunities for researchers and facilitate multi-data-set, multi-dimensional analytics toward effective translation.
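Pooling subject-level data across studies, as the ODC-TBI enables, requires harmonizing measurements reported on different scales before any multi-cohort analytics. A common minimal approach is within-study standardization (z-scoring); the two toy cohorts below are invented for illustration.

```python
import statistics

# Two hypothetical study cohorts reporting the same inflammatory marker
# on different scales (e.g., different assay units).
study_a = [12.0, 15.0, 11.0, 14.0]
study_b = [120.0, 150.0, 110.0, 140.0]

def zscore(values):
    # Standardize within a study so pooled values are comparable across cohorts.
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

# After standardization the two cohorts can be pooled into one analysis set.
pooled = zscore(study_a) + zscore(study_b)
print(len(pooled))  # 8
```

Here study_b is just study_a on a 10x scale, so the standardized values coincide exactly; real cross-study harmonization also has to reconcile variable names, units, and protocols, which is where repository-level metadata standards come in.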

15.
J Law Biosci ; 9(1): lsac005, 2022.
Article in English | MEDLINE | ID: mdl-35382430

ABSTRACT

As the adoption of digital health accelerates, health research increasingly relies on large quantities of biomedical data. Research institutions scattered across a large number of jurisdictions collaborate in producing and analyzing biomedical big data. National data protection legislation, for its part, grows increasingly complex and localized. To respond to heterogeneous legal requirements arising in numerous jurisdictions, decentralized health consortia must develop scalable organizational and technological arrangements that enable data flows across jurisdictional boundaries. In this article, proposals are made to enable health sector organizations to align established biomedical ethics processes and data analysis practices with shifting data protection norms through public law co-regulation, private law tools, and design-oriented approaches.

16.
J Allergy Clin Immunol Pract ; 10(6): 1614-1621.e1, 2022 06.
Article in English | MEDLINE | ID: mdl-35259539

ABSTRACT

BACKGROUND: Food allergy (FA) data lack a common base of terminology, which hinders data exchange among institutions. OBJECTIVE: To examine current FA concept coverage by clinical terminologies and to develop and evaluate a Food Allergy Data Dictionary (FADD). METHODS: Allergy/immunology templates and patient intake forms from 4 academic medical centers with expertise in FA were systematically reviewed, and in-depth discussions with a panel of FA experts were conducted to identify important FA clinical concepts and data elements. The candidate ontology was iteratively refined through a series of virtual meetings. The concepts were mapped to existing clinical terminologies manually with the ATHENA vocabulary browser. Finally, the revised dictionary document was vetted with experts across 22 academic FA centers and 3 industry partners. RESULTS: A consensus version 1.0 FADD was finalized in November 2020. The FADD v1.0 contained 936 discrete FA concepts grouped into 14 categories. The categories included both FA-specific concepts, such as foods triggering reactions, and general health care categories, such as medications. Although many FA concepts are included in existing clinical terminologies, some critical concepts are missing. CONCLUSIONS: The FADD provides a pragmatic tool that can enable improved structured coding of FA data for both research and clinical uses, as well as lay the foundation for the development of standardized FA structured data entry forms.


Subject(s)
Food Hypersensitivity, Controlled Vocabulary, Academic Medical Centers, Food/adverse effects, Food Hypersensitivity/epidemiology, Humans
17.
J Am Med Inform Assoc ; 29(4): 619-625, 2022 03 15.
Article in English | MEDLINE | ID: mdl-35289369

ABSTRACT

OBJECTIVE: The objective was to develop and operate a cloud-based federated system for managing, analyzing, and sharing patient data for research purposes, while allowing each resource sharing patient data to operate their component based upon their own governance rules. The federated system is called the Biomedical Research Hub (BRH). MATERIALS AND METHODS: The BRH is a cloud-based federated system built over a core set of software services called framework services. BRH framework services include authentication and authorization, services for generating and assessing findable, accessible, interoperable, and reusable (FAIR) data, and services for importing and exporting bulk clinical data. The BRH includes data resources providing data operated by different entities and workspaces that can access and analyze data from one or more of the data resources in the BRH. RESULTS: The BRH contains multiple data commons that in aggregate provide access to over 6 PB of research data from over 400 000 research participants. DISCUSSION AND CONCLUSION: With the growing acceptance of using public cloud computing platforms for biomedical research, and the growing use of opaque persistent digital identifiers for datasets, data objects, and other entities, there is now a foundation for systems that federate data from multiple independently operated data resources that expose FAIR application programming interfaces, each using a separate data model. Applications can be built that access data from one or more of the data resources.
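A federation like the BRH hinges on "opaque persistent digital identifiers": a front end must map a prefixed identifier to whichever member resource hosts the object, then call that resource's FAIR API. A minimal sketch of prefix-based resolution, with an entirely hypothetical registry and URL scheme:

```python
# Hypothetical registry mapping identifier prefixes to the data resource
# that hosts objects minted under that prefix.
REGISTRY = {
    "dg.XXTS": "https://resource-a.example.org",
    "dg.YYMD": "https://resource-b.example.org",
}

def resolve(guid):
    # Map a prefixed identifier like 'dg.XXTS/1a2b3c' to an object URL on
    # its hosting resource; the '/objects/' path is an illustrative choice.
    prefix, _, suffix = guid.partition("/")
    if not suffix or prefix not in REGISTRY:
        raise ValueError(f"unknown identifier: {guid}")
    return f"{REGISTRY[prefix]}/objects/{suffix}"

print(resolve("dg.XXTS/1a2b3c"))  # https://resource-a.example.org/objects/1a2b3c
```

Because resolution is independent of any one resource's data model, applications can fetch objects from multiple independently operated resources through the same mechanism.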


Subject(s)
Biomedical Research, Cloud Computing, Humans, Software
18.
J Am Med Inform Assoc ; 29(4): 631-642, 2022 03 15.
Article in English | MEDLINE | ID: mdl-34850002

ABSTRACT

OBJECTIVE: The integrated Translational Health Research Institute of Virginia (iTHRIV) aims to develop an information architecture to support data workflows throughout the research lifecycle for cross-state teams of translational researchers. MATERIALS AND METHODS: The iTHRIV Commons is a cross-state harmonized infrastructure supporting resource discovery, targeted consultations, and research data workflows. As the front end to the iTHRIV Commons, the iTHRIV Research Concierge Portal supports federated login, personalized views, and secure interactions with objects in the iTHRIV Commons federation. The canonical use case for the iTHRIV Commons involves an authenticated user, connected to their respective high-security institutional network, accessing the iTHRIV Research Concierge Portal web application in their browser and interfacing with multi-component iTHRIV Commons Landing Services installed behind the firewall at each participating institution. RESULTS: The iTHRIV Commons provides a technical framework, including both hardware and software resources located in the cloud and across partner institutions, that establishes standard representation of research objects and applies local data governance rules to enable access to resources from a variety of stakeholders, both contributing and consuming. DISCUSSION: The launch of the Commons API service at partner sites, and the addition of a public view of nonrestricted objects, will remove barriers to data access for cross-state research teams while supporting compliance and the secure use of data. CONCLUSIONS: The secure architecture, distributed APIs, and harmonized metadata of the iTHRIV Commons provide a methodology for compliant information and data sharing that can advance research productivity at Hub sites across the CTSA network.


Subject(s)
Software, Translational Biomedical Research, Information Dissemination, Workflow
19.
J Transl Med ; 19(1): 493, 2021 12 04.
Article in English | MEDLINE | ID: mdl-34863191

ABSTRACT

BACKGROUND: To drive translational medicine, modern day biobanks need to integrate with other sources of data (clinical, genomics) to support novel data-intensive research. Currently, vast amounts of research and clinical data remain in silos, held and managed by individual researchers operating under different standards and governance structures; a framework that impedes sharing and effective use of data. In this article, we describe the journey of British Columbia's Gynecological Cancer Research Program (OVCARE) in moving a traditional tumour biobank, an outcomes unit, and a collection of data silos into an integrated data commons to support data standardization and resource sharing under collaborative governance, as a means of providing the gynecologic cancer research community in British Columbia access to tissue samples and associated clinical and molecular data from thousands of patients. RESULTS: Through several engagements with stakeholders from various research institutions within our research community, we identified priorities and assessed the infrastructure needed to optimize and support data collection, storage, and sharing under three main research domains: (1) biospecimen collections, (2) molecular and genomics data, and (3) clinical data. We further built a governance model and a resource portal to implement protocols and standard operating procedures for seamless collection, management, and governance of interoperable data, making genomic and clinical data available to the broader research community. CONCLUSIONS: Proper infrastructure for data collection, sharing, and governance is a translational research imperative. We have consolidated our data holdings into a data commons, along with standardized operating procedures, to meet the research and ethics requirements of the gynecologic cancer community in British Columbia. The developed infrastructure brings together diverse data, computing frameworks, and tools and applications for managing, analyzing, and sharing data. Our data commons bridges data access gaps and barriers to precision medicine and approaches for diagnostics, treatment, and prevention of gynecological cancers, by providing access to the large datasets required for data-intensive science.


Subject(s)
Biological Specimen Banks, Biomedical Translational Science, Female, Genome, Genomics, Humans, Translational Biomedical Research
20.
Methods Mol Biol ; 2195: 263-275, 2021.
Article in English | MEDLINE | ID: mdl-32852769

ABSTRACT

Germ cell tumors (GCTs) are rare, but they account for 15% of all malignancies diagnosed during adolescence. The biological mechanisms underpinning their development are only starting to be explored, and current GCT treatment may be associated with significant toxicity. There is therefore an urgent need to understand the molecular basis of GCTs and to identify biomarkers that allow therapy to be tailored to individual patients. However, this research is severely hamstrung by the rarity of GCTs at individual hospitals and institutes. A publicly available genomic data commons, with GCT datasets compiled from different institutes and studies, would be a valuable resource to facilitate such research. In this study, we first reviewed publicly available web portals containing GCT genomics data, comparing data availability, data access, and analysis tools, and noting the limitations of using these resources for GCT molecular studies. Next, we designed a GCT data commons with a web portal, GCT Explorer, to assist the research community in storing, managing, searching, sharing, and analyzing data. The goal of this work is to facilitate exploration of the molecular basis of GCTs and translational research.


Subject(s)
Computational Biology, Genetic Databases, Disease Susceptibility, Germ Cell and Embryonal Neoplasms/etiology, Computational Biology/methods, Computer Security, Genetic Association Studies/methods, Genome, Genomics/methods, Humans, Germ Cell and Embryonal Neoplasms/metabolism, Germ Cell and Embryonal Neoplasms/pathology, Phenotype, User-Computer Interface, Web Browser