Results 1 - 20 of 200
1.
Heliyon ; 10(17): e37090, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39286198

ABSTRACT

To explore the effect of interchange spacing on drivers' visual characteristics in interchange merging areas, a high-density group of five interchanges on an expressway in Chongqing, China, was selected as the test site. A naturalistic driving test was conducted with 47 participants, and a Tobii Glasses II portable eye tracker was used to collect gaze data during driving. The drivers' fixation field was divided into six areas by applying a K-means dynamic clustering algorithm combined with the actual scenario. Markov chains were used to calculate the drivers' gaze transition probability matrices under different driving conditions, and gaze transition behaviors were analyzed for common-spacing interchanges, small-spacing interchanges, and composite interchanges. Under the ramp-mainline condition, drivers' fixations were primarily concentrated on the near-ahead and left-side areas, with higher rates of repeated fixations on the left rearview mirror and left-side line areas. The average fixation duration, saccade distance, and saccade speed at small-spacing interchanges were higher than at common-spacing interchanges. Additionally, under the mainline condition, the one-step transition probability and repeated fixation rates increased significantly for the right-side lane areas, and the average fixation and saccade indices at small-spacing interchanges were lower than those at common-spacing interchanges. The results show that the highest probabilities of repeated fixation occurred in the near-ahead and far-ahead areas of the interchange merging areas. Insufficient spacing resulted in more frequent zero values in the one-step transition probability matrices. These conclusions provide theoretical support for the optimal design and safe operation of merging areas in high-density interchange groups on urban expressways.
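The Markov-chain step described above can be sketched in a few lines: from a sequence of fixation areas, count one-step transitions and row-normalise. The area names and example sequence below are invented for illustration; diagonal entries correspond to repeated fixations, and zero entries mirror the zero values the study reports under insufficient spacing.

```python
from collections import Counter

def transition_matrix(fixations, areas):
    """One-step gaze transition probabilities: row-normalised counts area_i -> area_j."""
    counts = {a: Counter() for a in areas}
    for cur, nxt in zip(fixations, fixations[1:]):
        counts[cur][nxt] += 1
    matrix = {}
    for a in areas:
        total = sum(counts[a].values())
        matrix[a] = {b: (counts[a][b] / total if total else 0.0) for b in areas}
    return matrix

# Hypothetical fixation areas and sequence (not from the study's data)
areas = ["near_ahead", "far_ahead", "left_mirror"]
seq = ["near_ahead", "near_ahead", "left_mirror", "near_ahead", "far_ahead"]
m = transition_matrix(seq, areas)
# The repeated-fixation rate for an area is its diagonal entry,
# e.g. m["near_ahead"]["near_ahead"]
```

Rows with no outgoing transitions are all zeros, which is how the sparse (zero-heavy) matrices for small-spacing interchanges would appear in this representation.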

2.
Stud Health Technol Inform ; 316: 771-775, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176907

ABSTRACT

Ontologies play a key role in representing and structuring domain knowledge. In the biomedical domain, this type of representation is crucial for structuring, coding, and retrieving data. However, available ontologies do not encompass all the relevant concepts and relationships. In this paper, we propose SiMHOMer (Siamese Models for Health Ontologies Merging), a framework to semantically merge and integrate the most relevant ontologies in the healthcare domain, with a first focus on diseases, symptoms, drugs, and adverse events. We rely on the siamese neural models we developed and trained on biomedical data, BioSTransformers, to identify new relevant relations between concepts and to create new semantic relations, the objective being to build a new merged ontology that could be used in applications. To validate the proposed approach and the new relations, we relied on the UMLS Metathesaurus and the Semantic Network. Our first results show promising improvements for future research.
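The relation-proposal idea (scoring concept pairs by embedding similarity) can be illustrated with a minimal sketch. The vectors below are made-up stand-ins for BioSTransformers embeddings, and the threshold is an assumption, not the paper's value.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def propose_relation(emb_a, emb_b, threshold=0.8):
    """Propose a semantic relation between two concepts when their
    embeddings are sufficiently similar (threshold is illustrative)."""
    return cosine(emb_a, emb_b) >= threshold

# Toy embeddings for two near-synonymous concepts vs. two unrelated ones
related = propose_relation([1.0, 2.0, 0.5], [1.1, 2.1, 0.4])      # nearly parallel
unrelated = propose_relation([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])    # orthogonal
```

In the actual framework the candidate relations would then be validated against the UMLS Metathesaurus and Semantic Network rather than accepted on similarity alone.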


Subject(s)
Biological Ontologies , Semantics , Neural Networks, Computer , Humans , Unified Medical Language System
3.
MethodsX ; 13: 102871, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39157813

ABSTRACT

OCT imaging is an important technique for studying fouling in spacer-filled channels of reverse osmosis systems for seawater desalination. However, OCT imaging of membrane filtration channels with feed spacers is challenging because the spacer material can be (partly) transparent, making it difficult to detect and easy to mistake for fouling, and because the longer optical pathway through the spacer material distorts the image below the spacer. This study presents an automated 3D OCT image processing method in MATLAB for visualization and quantification of biofouling in spacer-filled channels. First, a spacer template of arbitrary size and rotation was generated from a CT scan of the feed spacer. Second, background noise and file size were reduced by representing the OCT image as a list of discrete reflectors. Finally, the spacer template was overlaid on the feed spacer in the 3D OCT image, enabling automated visualization of the feed spacer and correction of the distortions. Moreover, the method allows the selection of datasets with the same location relative to the position of the spacer, enabling systematic comparison between datasets and quantitative analysis.
• A spacer template of arbitrary size and rotation was generated from a CT scan.
• Background noise was removed and file size was reduced by representing the OCT dataset as a list of discrete reflectors.
• The spacer template was overlaid on the feed spacer in the 3D OCT image.

4.
Int J Comput Dent ; 0(0): 0, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39073119

ABSTRACT

AIM: To report a novel digital superimposition workflow that enables measuring the supra-crestal peri-implant soft tissue dimensions throughout implant treatment and afterwards. MATERIALS AND METHODS: A preoperative CBCT and intra-oral scans (IOS) are taken successively before surgery, at the end of the healing period, at prosthesis delivery, and over time; they are digitally superimposed in dedicated software. The stereolithography (STL) files of the healing abutment, the prosthetic abutment, and the crown are then successively merged into the superimposed set of IOSs. RESULTS: Merging the STL of each item successively into the superimposed set of IOSs captures the height and width of the supra-crestal soft tissues at every level of the healing abutment, the prosthetic abutment, and the crown. In addition, it allows measuring the vertical distance over which the crown exerts pressure on the gingiva and the thickness of the papillae at every level of the abutment. CONCLUSION: This novel digital superimposition workflow provides a straightforward method of measuring the vertical and horizontal dimensions of the supra-crestal peri-implant soft tissues, including the papillae, at each stage of the implant treatment process. It allows investigating a number of soft tissue variables that were previously inaccessible to clinical research and should help enhance our understanding of peri-implant soft tissue dynamics.

5.
Biotechnol Lett ; 46(5): 791-805, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38970710

ABSTRACT

The pernicious nature of low-quality sequencing data warrants improvement in the bioinformatics workflow for profiling microbial diversity. The conventional merging approach, which drops a copious amount of sequencing reads when processing low-quality amplicon data, requires alternative methods. In this study, a computational workflow combining merging and direct joining, in which paired-end reads lacking overlaps are concatenated and pooled with the merged sequences, is proposed to handle low-quality amplicon data. The proposed strategy was compared with two workflows: the merging approach, where the paired-end reads are merged, and the direct-joining approach, where the reads are concatenated. The results showed that the merging approach generates a significantly low number of amplicon sequences, limits microbiome inference, and obscures some microbial associations. Compared with the other workflows, the combined merging and direct-joining strategy reduces the loss of amplicon data, improves taxonomy classification, and, importantly, abates the misleading results associated with the merging approach when analysing low-quality amplicon data. The mock community analysis also supports these findings. In summary, researchers are advised to follow the merging and direct-joining workflow to avoid problems associated with low-quality data while profiling microbial community structure.
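The combined workflow can be caricatured in a few lines: merge a read pair when a sufficient exact overlap is found, otherwise direct-join (concatenate the forward read with the reverse-complemented reverse read). This toy sketch ignores base qualities and mismatches, which real tools handle; the sequences and the minimum-overlap value are invented.

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def merge_or_join(fwd, rev, min_overlap=5):
    """Merge a paired-end read if an overlap exists, else direct-join."""
    rc = revcomp(rev)
    # look for the longest suffix of fwd that exactly matches a prefix of rc
    for k in range(min(len(fwd), len(rc)), min_overlap - 1, -1):
        if fwd[-k:] == rc[:k]:
            return fwd + rc[k:]          # merged on a k-base overlap
    return fwd + rc                      # direct-joined: no overlap found

# Toy pair with a 4-base overlap once the reverse read is complemented
merged = merge_or_join("ACGTACGT", "AAACGT", min_overlap=4)  # -> "ACGTACGTTT"
```

Pooling both outcomes, rather than discarding pairs that fail to merge, is the essence of the strategy the abstract advocates.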


Subject(s)
Computational Biology , Microbiota , Microbiota/genetics , Computational Biology/methods , High-Throughput Nucleotide Sequencing/methods , Sequence Analysis, DNA/methods , Workflow , Bacteria/genetics , Bacteria/classification
6.
Curr Protoc ; 4(6): e1055, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38837690

ABSTRACT

Data harmonization involves combining data from multiple independent sources and processing the data to produce one uniform dataset. Merging separate genotype or whole-genome sequencing datasets has been proposed as a strategy to increase the statistical power of association tests by increasing the effective sample size. However, data harmonization is not a widely adopted strategy due to the difficulties of merging data (including confounding produced by batch effects and population stratification). Detailed data harmonization protocols are scarce and often conflicting. Moreover, data harmonization protocols that accommodate samples of admixed ancestry are practically non-existent. Existing procedures must be modified to ensure the heterogeneous ancestry of admixed individuals is incorporated into downstream analyses without confounding results. Here, we propose a set of guidelines for merging multi-platform genetic data from admixed samples that can be adopted by any investigator with elementary bioinformatics experience. We applied these guidelines to aggregate 1544 tuberculosis (TB) case-control samples from six separate in-house datasets and conducted a genome-wide association study (GWAS) of TB susceptibility. The GWAS performed on the merged dataset had improved power over analyzing the datasets individually and produced summary statistics free from bias introduced by batch effects and population stratification. © 2024 Wiley Periodicals LLC.
Basic Protocol 1: Processing separate datasets comprising array genotype data
Alternate Protocol 1: Processing separate datasets comprising array genotype and whole-genome sequencing data
Alternate Protocol 2: Performing imputation using a local reference panel
Basic Protocol 2: Merging separate datasets
Basic Protocol 3: Ancestry inference using ADMIXTURE and RFMix
Basic Protocol 4: Batch effect correction using pseudo-case-control comparisons


Subject(s)
Genome-Wide Association Study , Humans , Genome-Wide Association Study/methods , Genome-Wide Association Study/standards , Genomics/methods , Genomics/standards , Tuberculosis/genetics , Case-Control Studies , Guidelines as Topic , Genetic Predisposition to Disease
7.
Camb Q Healthc Ethics ; : 1-13, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38606432

ABSTRACT

Advances in brain-brain interface technologies raise the possibility that two or more individuals could directly link their minds, sharing thoughts, emotions, and sensory experiences. This paper explores conceptual and ethical issues posed by such mind-merging technologies in the context of clinical neuroethics. Using hypothetical examples along a spectrum from loosely connected pairs to fully merged minds, the authors sketch out a range of factors relevant to identifying the degree of a merger. They then consider potential new harms such as loss of identity, psychological domination, loss of mental privacy, and challenges for notions of autonomy and patient benefit when applied to merged minds. While radical technologies may seem to necessitate new ethical paradigms, the authors suggest that the individual focus underpinning clinical ethics can largely accommodate varying degrees of mind merger so long as individual patient interests remain identifiable. However, advance decision-making and directives may have limitations in addressing the dilemmas posed. Overall, mind-merging possibilities amplify existing challenges around loss of identity, relating to others, autonomy, privacy, and the delineation of patient interests. This paper lays the groundwork for developing resources to address the novel issues raised, while suggesting that the technologies reveal continuity with existing tensions in healthcare ethics.

8.
Nanotechnology ; 35(30)2024 May 07.
Article in English | MEDLINE | ID: mdl-38631322

ABSTRACT

The growth kinetics of colloidal lead halide perovskite nanomaterials are integral to their applications but remain poorly understood due to complex nucleation processes and the lack of in situ size-monitoring methods. Here we demonstrate that absorption spectra can be used to observe the in situ growth of ultrathin CsPbBr3 nanowires in solution, with reference to the effective-mass infinite square potential well model. By means of this method, we found that the ultrathin nanowires, fabricated by the hot-injection method, form within one minute. Subsequently, they merge with each other into thicker structures with increasing reaction time. We revealed that the nucleation, growth, and merging of the CsPbBr3 nanowires are determined by the acid concentration and ligand chain length. At lower acidity, the critical nucleation size of the nanowires is smaller, while the shorter the ligand chain length, the faster the merging among the nanowires. Moreover, the merging mode between nanowires changes with their nucleation size. These growth kinetics of CsPbBr3 nanowires provide a reference for optimizing synthesis conditions to obtain one-dimensional CsPbBr3 of the desired size, thus enabling accurate control of nanowire shape.

9.
Sensors (Basel) ; 24(8)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38676037

ABSTRACT

The aim of this paper is to discuss the effect of the sensor on the acoustic emission (AE) signature and to develop a methodology to reduce the sensor effect. Pencil leads are broken on PMMA plates at different source-sensor distances, and the resulting waves are detected with different sensors. Several transducers commonly used for acoustic emission measurements are compared with regard to their ability to reproduce the characteristic shapes of plate waves, and the consequences for AE descriptors are discussed. Their different responses show why similar test specimens and test conditions can yield disparate results. This sensor effect also makes the classification of different AE sources more difficult. In this context, a specific procedure is proposed to reduce the sensor effect and to enable an efficient selection of descriptors for data merging. Principal Component Analysis demonstrated that using Z-score-normalized descriptor data in conjunction with the Kruskal-Wallis test and identifying outliers can help reduce the sensor effect. This procedure leads to the selection of a common descriptor set with the same distribution for all sensors. These descriptors can be merged to create a library. This result opens up new outlooks for the generalization of acoustic emission signature libraries, a key point for the development of a database for machine learning.
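The Z-score normalisation underlying the descriptor-selection procedure is simple to sketch. Sensor names and descriptor values below are invented; after per-sensor standardisation, descriptors from different sensors share a common scale (zero mean, unit variance) and can be pooled into one library.

```python
import statistics

def zscore(values):
    """Standardise a list of descriptor values to zero mean and unit variance."""
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)
    return [(v - mu) / sigma for v in values]

# Hypothetical AE descriptor (e.g. amplitude) recorded by two sensors
# whose raw scales differ by an order of magnitude
sensor_a = [10.0, 12.0, 11.0, 13.0]
sensor_b = [100.0, 120.0, 110.0, 130.0]
za, zb = zscore(sensor_a), zscore(sensor_b)
# After normalisation the two sensors' descriptor distributions coincide;
# outliers could then be flagged, e.g. as |z| > 3, before merging.
```

A distribution test such as Kruskal-Wallis would then be applied to the normalised descriptors to keep only those with the same distribution across sensors.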

10.
PeerJ Comput Sci ; 10: e1863, 2024.
Article in English | MEDLINE | ID: mdl-38435574

ABSTRACT

This article presents a clustering effectiveness measurement model based on merging similar clusters to address problems experienced by the affinity propagation (AP) algorithm in the clustering process, such as excessive local clustering, low accuracy, and invalid clustering evaluation results caused by the lack of variety in some internal evaluation indices when the number of clusters is very high. First, building on the "rough clustering" stage of the AP clustering algorithm, similar clusters are merged according to the relationship between the similarity of any two clusters and the average inter-cluster similarity across the entire sample set, decreasing the maximum number of clusters Kmax. Then, a new scheme is proposed to calculate intra-cluster compactness, inter-cluster relative density, and the inter-cluster overlap coefficient. On the basis of this new method, several internal evaluation indices based on intra-cluster cohesion and inter-cluster dispersion are designed. Experimental results show that the proposed model performs clustering and classification correctly and provides accurate cluster-number ranges on the public UCI and NSL-KDD datasets, and that it is significantly superior to the three improved clustering algorithms it was compared with in terms of intrusion detection indices such as detection rate and false positive rate (FPR).
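The first step, merging clusters whose pairwise similarity exceeds the average inter-cluster similarity, can be sketched as follows. The centroid-based negative-distance similarity and the toy points are illustrative assumptions, not the article's exact formulation.

```python
def centroid(pts):
    """Coordinate-wise mean of a cluster's points."""
    n = len(pts)
    return tuple(sum(c) / n for c in zip(*pts))

def sim(a, b):
    """Similarity as negative Euclidean distance between centroids."""
    dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return -dist

def merge_similar(clusters):
    """Merge cluster pairs whose similarity exceeds the average
    inter-cluster similarity over all pairs (one greedy pass)."""
    cents = [centroid(c) for c in clusters]
    pairs = [(i, j) for i in range(len(cents)) for j in range(i + 1, len(cents))]
    if not pairs:
        return clusters
    avg = sum(sim(cents[i], cents[j]) for i, j in pairs) / len(pairs)
    merged, used = [], set()
    for i, j in pairs:
        if i not in used and j not in used and sim(cents[i], cents[j]) > avg:
            merged.append(clusters[i] + clusters[j])
            used.update((i, j))
    merged.extend(c for k, c in enumerate(clusters) if k not in used)
    return merged

# Two nearby toy clusters get merged; the distant one survives intact
merged_clusters = merge_similar([[(0, 0), (0, 1)], [(0, 2)], [(10, 10)]])
```

Each merge reduces the effective cluster count, which is the mechanism by which the model lowers Kmax before the evaluation indices are computed.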

11.
Stat Methods Med Res ; 33(4): 574-588, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38446999

ABSTRACT

In preclinical investigations, for example in vitro, in vivo, and in silico studies, the pharmacokinetic, pharmacodynamic, and toxicological characteristics of a drug are evaluated before advancing to a first-in-human trial. Usually, each study is analyzed independently, and the human dose range does not leverage the knowledge gained from all studies. Taking all preclinical data into account through inferential procedures can be particularly valuable for obtaining a more precise and reliable starting dose and dose range. Our objective is to propose a Bayesian framework for multi-source data integration that is customizable and tailored to the specific research question. We focus on preclinical results extrapolated to humans, which allows us to predict the quantities of interest (e.g., maximum tolerated dose) in humans. We build a four-step approach based on sequential parameter estimation for each study, extrapolation to humans, commensurability checking between posterior distributions, and final information merging to increase the precision of estimation. The new framework is evaluated via an extensive simulation study based on a real-life example in oncology. Our approach makes better use of all the information compared with a standard framework, reducing uncertainty in the predictions and potentially leading to a more efficient dose selection.
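The final information-merging step can be illustrated, in a much-simplified conjugate-normal form, by precision-weighted pooling of per-study posteriors. The paper's actual framework includes extrapolation and commensurability checks that this sketch omits; the numbers are invented.

```python
def merge_normals(estimates):
    """Pool normal posteriors (mean, variance) by precision weighting.
    Returns the pooled (mean, variance); pooled variance shrinks as
    studies are added, reflecting the gain in precision."""
    precision = sum(1.0 / v for _, v in estimates)
    mean = sum(m / v for m, v in estimates) / precision
    return mean, 1.0 / precision

# Two hypothetical study posteriors for a dose-related quantity
pooled_mean, pooled_var = merge_normals([(1.0, 0.5), (1.4, 1.0)])
# pooled variance is 1/3, smaller than either study's 0.5 or 1.0
```

In the full framework, studies whose posteriors fail the commensurability check would be down-weighted or excluded rather than pooled naively as here.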


Subject(s)
Research , Humans , Bayes Theorem , Computer Simulation
12.
Accid Anal Prev ; 199: 107530, 2024 May.
Article in English | MEDLINE | ID: mdl-38437756

ABSTRACT

Merging areas are potential bottlenecks for continuous traffic flow on freeways. Traffic incidents in freeway merging areas are closely related to decision-making errors of human drivers, for which autonomous vehicle (AV) technologies are expected to enhance safety performance. However, evaluating the safety impact of AVs is challenging in practice due to the lack of real-world driving and incident data. Despite the increasing number of simulation-based AV studies, most relied on single traffic/vehicle driving simulators, which exhibit limitations such as inaccurate description of AV behavior using pre-defined driving models, limited testing modules, and a lack of high-fidelity traffic scenarios. To this end, this study addresses these challenges by customizing different types of car-following models for AVs on freeways and developing a software-in-the-loop co-simulation platform for safety performance evaluation. Specifically, the environmental perception module is integrated in PreScan, the decision-making and control model for AVs is implemented in MATLAB, and the traffic flow environment is established in Vissim. Such a co-simulation platform can reproduce mixed traffic with AVs to a large extent. Taking a real freeway merging scenario as an example, comprehensive experiments were conducted by introducing a single AV and multiple AVs on the freeway mainline, respectively. The single-AV experiment investigated the performance of different car-following models microscopically in the case of a merging conflict; the safety and comfort of AVs were examined in terms of TTC and jerk, respectively. The multiple-AV experiment examined the safety impact of AVs on mixed traffic in freeway merging areas macroscopically using the developed risk assessment model. The results show that AVs could bring significant benefits to freeway safety, as traffic conflicts and risks are substantially reduced with incremental market penetration rates.


Subject(s)
Autonomous Vehicles , Humans , Accidents, Traffic/prevention & control , Computer Simulation , Software
13.
Micromachines (Basel) ; 15(2)2024 Feb 07.
Article in English | MEDLINE | ID: mdl-38398978

ABSTRACT

Although the enormous potential of droplet-based microfluidics has been successfully demonstrated over the past two decades for medical, pharmaceutical, and academic applications, its inherent potential has not yet been fully exploited. Nevertheless, the cultivation of biological cells and 3D cell structures such as spheroids and organoids, located in serially arranged droplets in micro-channels, has a range of benefits compared with established cultivation techniques based on, e.g., microplates and microchips. To exploit the enormous potential of the droplet-based cell cultivation technique, a number of basic functions have to be fulfilled. In this paper, we describe microfluidic modules that realize the following basic functions with high precision: (i) droplet generation, (ii) mixing of cell suspensions and cell culture media in the droplets, (iii) droplet content detection, and (iv) active fluid injection into serially arranged droplets. The robustness of the Two-Fluid Probe's droplet generation is further investigated using different flow rates, and advantages and disadvantages in comparison with chip-based solutions are discussed. New chip-based modules, namely the gradient, piezo-valve-based conditioning, analysis, and microscopy modules, are characterized in detail and their high-precision functionalities are demonstrated. These microfluidic modules are micro-machined, and because the surfaces of their micro-channels are plasma-treated, cell cultivation experiments can be performed with any kind of cell culture medium without the use of surfactants. This is all the more important when droplets are used to investigate cell cultures such as stem cells or cancer cells, whether as cell suspensions, 3D cell structures, or tissue fragments, over days or even weeks for versatile applications.

14.
J Oral Sci ; 66(1): 70-74, 2024.
Article in English | MEDLINE | ID: mdl-38233158

ABSTRACT

PURPOSE: To clarify the magnification error caused by the degree of tilt of the incisor and the elevation of the X-ray focus position, and to verify the effect of magnification correction in vertical dual-exposure panoramic radiography. METHODS: Panoramic radiographic images of a phantom embedding 26 steel balls were taken at different heights (0, 5, 10, 15, and 20 mm) and tilt angles (0°, 10°, 20°, and 30°) to evaluate vertical magnification under each condition. Errors and correlation coefficients in the vertical magnifications were calculated between the measured and theoretical magnification values. RESULTS: The more the steel-ball phantom was tilted, the more the images of the uppermost steel balls were laterally stretched. In the vertical direction, the tilt angle of the object in the incisal region also influenced image magnification. The range of error in vertical magnification was -0.35% to 0.30%. Spearman's rank correlation coefficient between the measured and theoretical magnification values was 0.983. CONCLUSION: Vertical magnification correction has the potential to improve image quality when merging panoramic radiographs in vertical dual-exposure panoramic radiography.


Subject(s)
Incisor , Steel , Radiography, Panoramic
15.
Chemistry ; 30(6): e202303337, 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-37987541

ABSTRACT

A photocatalytic domain of doubly decarboxylative C(sp2)-C(sp2) cross-coupling is disclosed. Merging iridium photocatalysis and palladium catalysis forges carbon-carbon bonds through a tandem dual-radical pathway. The present catalytic platform efficiently cross-couples α,β-unsaturated acids and α-keto acids to afford a variety of α,β-unsaturated ketones with excellent (E)-selectivity and functional-group tolerance. Mechanistically, the photocatalyst operates through a reductive quenching cycle, whereas the cross-coupling proceeds via a one-electron oxidative palladacycle.

16.
J Oral Sci ; 66(1): 37-41, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38030284

ABSTRACT

PURPOSE: To evaluate the image quality of vertical dual-exposure panoramic radiography (PR), which merges two PR images taken at different focus heights to reduce ghost images of the cervical vertebrae (CV) and intervertebral spaces (IVS) in the incisor region. METHODS: PR images of an aluminum block, a CV phantom, and a human head phantom were taken at 0 mm and merged with and subtracted from PR images taken at other heights (0, 5, 10, 15, and 20 mm) to create new images, e.g., Merg0 + 15 mm and Sub0 - 10 mm. The subtracted images were analyzed objectively according to the uniformity of the line profile. Merged images were evaluated subjectively by six raters to determine the influence of the ghost images. RESULTS: Objective evaluation revealed a positional shift in the ghost images according to the height of the focus for both phantoms. In the subjective evaluation, the normal PR (Merg0 + 0 mm) showed the worst score, indicating a strong influence of CV and IVS ghost images. CONCLUSION: The vertical dual-exposure PR method, which merges PR images taken at the normal position and at a higher X-ray focus, can reduce CV and IVS ghost images in the incisor region.


Subject(s)
Cervical Vertebrae , Humans , Radiography, Panoramic/methods , Cervical Vertebrae/diagnostic imaging , Phantoms, Imaging
17.
Sci Total Environ ; 912: 169476, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38145671

ABSTRACT

Realistic representation of hydrological drought events is increasingly important in a world facing decreased freshwater availability. Index-based drought monitoring systems are often adopted to represent the evolution and distribution of hydrological droughts; they mainly rely on hydrological model simulations to compute these indices. Recent studies, however, indicate that model-derived water storage estimates might have difficulty adequately representing reality. Here, a novel Markov Chain Monte Carlo - Data Assimilation (MCMC-DA) approach is implemented to merge global Terrestrial Water Storage (TWS) changes from the Gravity Recovery And Climate Experiment (GRACE) and its Follow-On mission (GRACE-FO) with water storage estimates derived from the W3RA water balance model. The MCMC-DA-derived sum of deep-rooted soil and groundwater storage estimates is then used to compute 0.5° standardized groundwater drought indices globally, showing the impact of GRACE/GRACE-FO DA on a global index-based hydrological drought monitoring system. Our numerical assessment covers the period 2003-2021 and shows that integrating GRACE/GRACE-FO data modifies the seasonality and inter-annual trends of water storage estimates. Considerable increases in the length and severity of extreme droughts are found in basins that exhibited multi-year water storage fluctuations and those affected by climate teleconnections.
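A standardized storage index of the kind described can be sketched as a per-calendar-month z-score of storage values. Actual indices typically fit a probability distribution to the data first; the plain z-score and the toy storage values below are simplifying assumptions.

```python
import statistics

def standardized_index(monthly_storage):
    """monthly_storage: list of (month, storage_value) pairs.
    Standardizes each value against the mean and spread of its own
    calendar month across years; negative values indicate drought."""
    by_month = {}
    for month, value in monthly_storage:
        by_month.setdefault(month, []).append(value)
    stats = {m: (statistics.mean(v), statistics.pstdev(v))
             for m, v in by_month.items()}
    return [(value - stats[m][0]) / stats[m][1]
            for m, value in monthly_storage]

# Three Januaries of hypothetical storage: average, wet, and dry years
z = standardized_index([(1, 10.0), (1, 12.0), (1, 8.0)])
```

Computing the index per calendar month removes the seasonality, so that only anomalies relative to the typical seasonal cycle register as drought.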

18.
Sensors (Basel) ; 23(23)2023 Nov 24.
Article in English | MEDLINE | ID: mdl-38067754

ABSTRACT

In the production process of metal industrial products, the deficiencies and limitations of existing technologies and working conditions can have adverse effects on the quality of the final products, making surface defect detection particularly crucial. However, collecting a sufficient number of samples of defective products can be challenging. Therefore, treating surface defect detection as a semi-supervised problem is appropriate. In this paper, we propose a method based on a Transformer with pruned and merged multi-scale masked feature fusion. This method learns the semantic context from normal samples. We incorporate the Vision Transformer (ViT) into a generative adversarial network to jointly learn the generation in the high-dimensional image space and the inference in the latent space. We use an encoder-decoder neural network with long skip connections to capture information between shallow and deep layers. During training and testing, we design block masks of different scales to obtain rich semantic context information. Additionally, we introduce token merging (ToMe) into the ViT to improve the training speed of the model without affecting the training results. In this paper, we focus on the problems of rust, scratches, and other defects on the metal surface. We conduct various experiments on five metal industrial product datasets and the MVTec AD dataset to demonstrate the superiority of our method.

19.
Metabolites ; 13(12)2023 Nov 21.
Article in English | MEDLINE | ID: mdl-38132849

ABSTRACT

Metabolomics encounters challenges in cross-study comparisons due to diverse metabolite nomenclature and reporting practices. To bridge this gap, we introduce the Metabolites Merging Strategy (MMS), offering a systematic framework to harmonize multiple metabolite datasets for enhanced interstudy comparability. MMS has three steps. Step 1: translation and merging of the different datasets by employing InChIKeys for data integration, including translation of metabolite names where needed. Step 2: retrieval of attributes from the InChIKey, including name descriptors (title name from PubChem and RefMet name from Metabolomics Workbench), chemical properties (molecular weight and molecular formula), both systematic (InChI, InChIKey, SMILES) and non-systematic identifiers (PubChem, ChEBI, HMDB, KEGG, LipidMaps, DrugBank, Bin ID and CAS number), and their ontology. Step 3: a meticulous three-part curation process to rectify disparities for conjugated base/acid compounds (optional), missing attributes, and synonyms (duplicated information). The MMS procedure is exemplified through a case study of urinary asthma metabolites, where MMS facilitated the identification of significant pathways that remained hidden when no dataset merging strategy was followed. This study highlights the need for standardized and unified metabolite datasets to enhance the reproducibility and comparability of metabolomics studies.
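MMS Step 1, merging datasets on InChIKey so the same compound reported under different names collapses to one record, can be sketched with plain dictionaries. The keys and metabolite names below are fabricated for illustration.

```python
def merge_on_inchikey(datasets):
    """Merge metabolite tables keyed by InChIKey, accumulating all
    reported names (synonyms) for each unique compound."""
    merged = {}
    for ds in datasets:
        for record in ds:
            key = record["inchikey"]
            merged.setdefault(key, {"inchikey": key, "names": set()})
            merged[key]["names"].add(record["name"])
    return merged

# Two hypothetical studies reporting overlapping compounds under
# different names (InChIKeys shortened and invented)
study_a = [{"inchikey": "AAA-BBB-C", "name": "metabolite X"}]
study_b = [{"inchikey": "AAA-BBB-C", "name": "X (synonym)"},
           {"inchikey": "DDD-EEE-F", "name": "metabolite Y"}]
merged = merge_on_inchikey([study_a, study_b])
# Two unique compounds remain; the first carries both reported names.
```

Steps 2 and 3 would then attach identifiers and ontology terms to each InChIKey and curate synonyms, which is where the real harmonization effort lies.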

20.
Front Bioinform ; 3: 1260486, 2023.
Article in English | MEDLINE | ID: mdl-38131007

ABSTRACT

Ancient DNA is highly degraded, resulting in very short sequences. Reads generated with modern high-throughput sequencing machines are generally longer than ancient DNA molecules; therefore, the reads often contain some portion of the sequencing adaptors. It is crucial to remove those adaptors, as they can interfere with downstream analysis. Furthermore, the overlapping portions produced when DNA has been read forward and backward (paired-end) can be merged to correct sequencing errors and improve read quality. Several tools have been developed for adapter trimming and read merging; however, their accuracy and their potential impact on downstream analyses have not been systematically evaluated. Through simulation of sequencing data, seven commonly used tools were analyzed for their ability to reconstruct ancient DNA sequences through read merging. The analyzed tools exhibit notable differences in their abilities to correct sequence errors and identify the correct read overlap, but the most substantial difference is observed in their ability to calculate quality scores for merged bases. Selecting the most appropriate tool for a given project depends on several factors: some tools, such as fastp, have shortcomings, whereas others, like leeHom, outperform the rest in most aspects. While the choice of tool did not result in a measurable difference when analyzing population genetics using principal component analysis, downstream analyses that are sensitive to wrongly merged reads or that rely on quality scores can be significantly impacted by the choice of tool.
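The quality-score issue the comparison highlights can be made concrete: when both reads of a pair agree on an overlapping base, merging tools typically raise that base's quality. A common simple approximation, not any specific tool's exact formula, is to sum the two Phred scores and cap the result.

```python
def phred_to_perr(q):
    """Phred score -> probability that the base call is wrong."""
    return 10 ** (-q / 10)

def merged_quality_agree(q1, q2, cap=60):
    """Approximate merged quality for agreeing bases: summing Phred
    scores corresponds to multiplying independent error probabilities;
    the cap reflects the bounded quality range tools emit."""
    return min(q1 + q2, cap)

# Independent errors: perr(30) * perr(20) = 1e-3 * 1e-2 = 1e-5, i.e. Phred 50
q_merged = merged_quality_agree(30, 20)
```

How tools depart from this approximation, and how they score disagreeing bases, is precisely where the evaluated tools diverge most, which is why quality-sensitive downstream analyses are affected by the choice of tool.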
