Results 1 - 20 of 13,049
1.
PLoS One ; 19(9): e0310092, 2024.
Article in English | MEDLINE | ID: mdl-39264894

ABSTRACT

INTRODUCTION: The Fragility Index (FI) and the FI family are statistical tools that measure the robustness of randomized controlled trials (RCTs) by examining how many patients would need a different outcome to change the statistical significance of a trial's main results. These tools have recently gained popularity for assessing the robustness or fragility of clinical trials in many clinical areas and for analyzing the strength of the trial outcomes underpinning guideline recommendations. However, they have not yet been applied to perioperative care Clinical Practice Guidelines (CPGs). OBJECTIVES: This study aims to survey clinical practice guidelines in anesthesiology to determine the Fragility Index of the RCTs supporting their recommendations and to explore trial characteristics associated with fragility. METHODS AND ANALYSIS: A methodological survey will be conducted using the target population of RCTs referenced in the recommendations of the CPGs of the North American and European societies from 2012 to 2022. The FI will be assessed for statistically significant and non-significant trial results. Poisson regression will be used to explore factors associated with fragility. DISCUSSION: This methodological survey aims to estimate the Fragility Index of the RCTs supporting perioperative care guidelines published by North American and European societies of anesthesiology between 2012 and 2022. The results will inform the methodological quality of the RCTs included in perioperative care guidelines and identify areas for improvement.
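The counting procedure behind the Fragility Index is simple to sketch. The snippet below is an illustration only, not code from the cited protocol; the trial counts in the test case are hypothetical. It flips outcomes one patient at a time until a two-sided Fisher exact test loses significance:

```python
from scipy.stats import fisher_exact

def fragility_index(e1, n1, e2, n2, alpha=0.05):
    """Fragility Index of a statistically significant 2x2 trial result.

    e1/n1: events and total patients in the group with fewer events;
    e2/n2: events and total patients in the other group.
    Counts how many patients in group 1 would need their outcome
    flipped (non-event -> event) before Fisher's exact p-value
    crosses the alpha threshold.
    """
    p = fisher_exact([[e1, n1 - e1], [e2, n2 - e2]])[1]
    if p >= alpha:
        return 0  # not significant to begin with
    flips = 0
    while p < alpha and e1 < n1:
        e1 += 1  # convert one non-event into an event
        flips += 1
        p = fisher_exact([[e1, n1 - e1], [e2, n2 - e2]])[1]
    return flips
```

A small FI means that changing the outcome of only a handful of patients would erase the statistical significance of the result.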


Subjects
Perioperative Care , Randomized Controlled Trials as Topic , Perioperative Care/standards , Perioperative Care/methods , Randomized Controlled Trials as Topic/standards , Humans , Practice Guidelines as Topic , Surveys and Questionnaires , Anesthesiology/standards , Anesthesiology/methods , Research Design/standards
2.
BMC Med Res Methodol ; 24(1): 196, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39251912

ABSTRACT

BACKGROUND: Systematic reviews and data synthesis of randomised clinical trials play a crucial role in clinical practice, research, and health policy. Trial sequential analysis can be used in systematic reviews to control type I and type II errors, but methodological errors, including a lack of protocols and transparency, are cause for concern. We assessed the reporting of trial sequential analyses. METHODS: We searched Medline and the Cochrane Database of Systematic Reviews from 1 January 2018 to 31 December 2021 for systematic reviews and meta-analysis reports that included a trial sequential analysis. Only studies with at least two randomised clinical trials analysed in a forest plot and a trial sequential analysis were included. Two independent investigators assessed the studies. We evaluated the protocolisation, reporting, and interpretation of the analyses, including their effect on any GRADE evaluation of imprecision. RESULTS: We included 270 systematic reviews and 274 meta-analysis reports and extracted data from 624 trial sequential analyses. Only 134/270 (50%) systematic reviews planned the trial sequential analysis in the protocol. For analyses of dichotomous outcomes, the proportion of events in the control group was missing in 181/439 (41%), the relative risk reduction in 105/439 (24%), alpha in 30/439 (7%), beta in 128/439 (29%), and heterogeneity in 232/439 (53%). For analyses of continuous outcomes, the minimally relevant difference was missing in 125/185 (68%), the variance (or standard deviation) in 144/185 (78%), alpha in 23/185 (12%), beta in 63/185 (34%), and heterogeneity in 105/185 (57%). A graphical illustration of the trial sequential analysis was present in 93% of the analyses; however, the Z-curve was wrongly displayed in 135/624 (22%), and 227/624 (36%) did not include futility boundaries. The overall transparency of all 624 analyses was very poor in 236 (38%) and poor in 173 (28%).
CONCLUSIONS: The majority of trial sequential analyses are not transparent when preparing or presenting the required parameters, partly due to missing or poorly conducted protocols. This hampers interpretation, reproducibility, and validity. STUDY REGISTRATION: PROSPERO CRD42021273811.
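Several of the parameters this survey found missing (control-group event proportion, relative risk reduction, alpha, beta, heterogeneity) feed directly into the required information size on which a trial sequential analysis hinges. A textbook-style sketch, not code from the review, with illustrative parameter values:

```python
from scipy.stats import norm

def required_information_size(p_control, rrr, alpha=0.05, beta=0.10,
                              diversity=0.0):
    """Diversity-adjusted required information size (total patients)
    for a trial sequential analysis of a dichotomous outcome.

    p_control: anticipated control-group event proportion
    rrr:       anticipated relative risk reduction, e.g. 0.20 for 20%
    diversity: D^2 heterogeneity adjustment in [0, 1); 0 = no adjustment
    """
    p_exp = p_control * (1 - rrr)      # anticipated experimental-group risk
    p_bar = (p_control + p_exp) / 2    # average event proportion
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(1 - beta)
    n_fixed = (4 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar)
               / (p_control - p_exp) ** 2)
    return n_fixed / (1 - diversity)

# 10% control risk, 20% RRR, 90% power: roughly 8,600 patients unadjusted
ris = required_information_size(0.10, 0.20)
```

Omitting any one of these inputs, as most of the surveyed analyses did, makes the boundary calculation irreproducible.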


Subjects
Meta-Analysis as Topic , Randomized Controlled Trials as Topic , Systematic Reviews as Topic , Humans , Systematic Reviews as Topic/methods , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/standards , Research Design/standards
3.
Neurology ; 103(7): e209861, 2024 Oct 08.
Article in English | MEDLINE | ID: mdl-39236270

ABSTRACT

Machine learning (ML) methods are becoming more prevalent in the neurology literature as alternatives to traditional statistical methods for addressing challenges in the analysis of modern data sets. Despite the increasing popularity of ML methods in neurology studies, some authors do not fully address all of the items recommended in reporting guidelines. The authors of this Research Methods article are members of the Neurology® editorial board and have reviewed many studies using ML methods. Several critiques appear repeatedly in their review reports and could be avoided if guidance were available. In this article, we detail common critiques found in ML research studies and make recommendations for how to avoid them. The first critique involves misalignment between the study goals and the analysis conducted. The second critique concerns whether ML terminology is used appropriately. Critiques 3-6 relate to study design: justifying sample sizes and the suitability of the data set for the study goals, describing the ML analysis pipeline sufficiently, quantifying the amount of missing data and providing information about how missing data were handled, and including uncertainty estimates for key metrics. The seventh critique focuses on fairly describing both the strengths and the limitations of the ML study, including the analysis methodology and results. We provide neurology examples for each critique and guidance on how to avoid it. Overall, we recommend that authors use ML-specific checklists developed by research consortia for designing and reporting studies that use ML. We also recommend that authors involve both a statistician and an ML expert in work that uses ML. Although our list of critiques is not exhaustive, our recommendations should help improve the quality and rigor of ML studies.
ML has great potential to revolutionize neurology, but investigators need to conduct and report the results in a way that allows readers to fully evaluate the benefits and limitations of ML approaches.


Subjects
Machine Learning , Neurology , Humans , Biomedical Research/standards , Biomedical Research/methods , Neurology/standards , Neurology/methods , Research Design/standards
4.
Medicine (Baltimore) ; 103(39): e39933, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39331860

ABSTRACT

Although the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Acupuncture (PRISMA-A) checklists have been in use for several years, compliance rates remain suboptimal. We investigated the quality of reporting of meta-analyses of acupuncture published in PubMed and compared compliance rates following the publication of the PRISMA and PRISMA-A recommendations. We searched PubMed for articles published between January 1st, 2020 and December 31st, 2022; after screening with the EndNote X9 document management software and manual screening, 180 meta-analyses of acupuncture were selected as the sample. The PRISMA and PRISMA-A checklists were used to evaluate the quality of the literature, and data were collected using a standard form. Pearson χ2 and/or Fisher exact tests were used to assess differences in reporting among groups, and logistic regression was used to calculate ORs and their 95% CIs. The overall compliance rate across all items in the PRISMA checklist was 61.3%, and items with a compliance rate below 50% accounted for 35.71% of the total items. The overall compliance rate across all items in the PRISMA-A checklist was 56.9%, and items with a compliance rate below 50% accounted for 31.25% of all items. Compliance with PRISMA or PRISMA-A did not differ significantly between journals in Journal Citation Reports quartiles 1-2 and those in quartiles 3-4 (P > .05). Regardless of the level of the publishing journal, the published research showed obvious deficiencies in study details, the reference basis for study design, analysis methods, rigor, and scientific soundness. Education on the standardization of research reports must be strengthened.
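The quartile comparison reported here, a Pearson χ2 test with an odds ratio and 95% CI, can be reproduced on a toy 2×2 table; all counts below are invented for illustration and do not come from the study:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical item-level compliance counts (reported vs. not reported),
# split by Journal Citation Reports quartile group.
table = np.array([[310, 140],   # Q1-Q2 journals
                  [295, 155]])  # Q3-Q4 journals

chi2, p, dof, expected = chi2_contingency(table)

# Odds ratio for compliance (Q1-Q2 vs. Q3-Q4) with a Wald 95% CI
a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = odds_ratio * np.exp(-1.96 * se_log_or)
ci_high = odds_ratio * np.exp(1.96 * se_log_or)
```

With these made-up counts the difference is non-significant (p > .05), mirroring the pattern the paper reports between quartile groups.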


Subjects
Acupuncture Therapy , Guideline Adherence , Meta-Analysis as Topic , Humans , Acupuncture Therapy/standards , Acupuncture Therapy/methods , Guideline Adherence/statistics & numerical data , Checklist , Systematic Reviews as Topic , Research Design/standards
5.
JMIR Res Protoc ; 13: e58202, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39293047

ABSTRACT

BACKGROUND: The Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool, and more recently QUADAS-2, were developed to aid the evaluation of methodological quality within primary diagnostic accuracy studies. However, in its current form, QUADAS-2 does not address the unique considerations raised by artificial intelligence (AI)-centered diagnostic systems. The rapid progression of the AI diagnostics field demands suitable quality assessment tools to determine the risk of bias and applicability, and subsequently to evaluate translational potential for clinical practice. OBJECTIVE: We aim to develop an AI-specific QUADAS (QUADAS-AI) tool that addresses the specific challenges associated with the appraisal of AI diagnostic accuracy studies. This paper describes the processes and methods that will be used to develop QUADAS-AI. METHODS: The development of QUADAS-AI can be distilled into 3 broad stages. In stage 1, a project organization phase, a project team and a steering committee were established; the steering committee consists of a panel of international experts representing diverse stakeholder groups. Following this, the scope of the project was finalized. In stage 2, an item generation process will be completed following (1) a mapping review, (2) a meta-research study, (3) a scoping survey of international experts, and (4) a patient and public involvement and engagement exercise. Candidate items will then be put forward to the international Delphi panel to achieve consensus for inclusion in the revised tool. A modified Delphi consensus methodology involving multiple online rounds and a final consensus meeting will be carried out to refine the tool, following which the initial QUADAS-AI tool will be drafted. A piloting phase will be carried out to identify components that are either ambiguous or missing. In stage 3, once the steering committee has finalized the QUADAS-AI tool, specific dissemination strategies will be aimed toward academic, policy, regulatory, industry, and public stakeholders. RESULTS: As of July 2024, the project organization phase, as well as the mapping review and meta-research study, have been completed. We aim to complete item generation, including the Delphi consensus, and finalize the tool by the end of 2024. QUADAS-AI will therefore provide, by the beginning of 2025, a consensus-derived platform upon which stakeholders may systematically appraise the methodological quality of AI diagnostic accuracy studies. CONCLUSIONS: AI-driven systems comprise an increasingly significant proportion of research in clinical diagnostics. QUADAS-AI will aid the evaluation of studies in this domain by identifying bias and applicability concerns. As such, QUADAS-AI may form a key part of clinical, governmental, and regulatory evaluation frameworks for AI diagnostic systems globally. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/58202.


Subjects
Artificial Intelligence , Qualitative Research , Humans , Research Design/standards , Quality Assurance, Health Care/methods , Delphi Technique
10.
Syst Rev ; 13(1): 244, 2024 Sep 28.
Article in English | MEDLINE | ID: mdl-39342302

ABSTRACT

BACKGROUND: Meta-epidemiological research plays a vital role in providing the empirical evidence needed to develop methodological manuals and tools, but its reporting quality has not been comprehensively assessed, and the influence of reporting guidelines remains unclear. The current study aims to evaluate the reporting quality of meta-epidemiological studies, assess the impact of reporting guidelines, and identify factors influencing reporting quality. METHODS: We searched PubMed and Embase for meta-epidemiological studies. The reporting quality of these studies was assessed for adherence to established reporting guidelines. Two researchers independently screened the studies and assessed the quality of the included studies. Time-series segmented linear regression was used to evaluate changes in reporting quality over time, while beta-regression analysis was performed to identify factors significantly associated with reporting quality. RESULTS: We initially identified 1720 articles, of which 125 meta-epidemiological studies met the inclusion criteria. Of these, 65 (52%) had low reporting quality, 60 (48%) had moderate quality, and none achieved high quality. Of the 24 items derived from established reporting guidelines, 4 had poor adherence, 13 had moderate adherence, and 7 had high adherence. A high journal impact factor (≥ 10) (OR = 1.42, 95% CI: 1.13, 1.80; P = 0.003) and protocol registration (OR = 1.70, 95% CI: 1.30, 2.22; P < 0.001) were significantly associated with better reporting quality. Publication of the reporting guideline did not significantly change the mean reporting quality score (- 0.53, 95% CI: - 3.37, 2.31; P = 0.67) or the trend (- 0.38, 95% CI: - 1.02, 0.26; P = 0.20). CONCLUSIONS: Our analysis showed suboptimal reporting quality in meta-epidemiological studies, with no improvement after the reporting guideline was published in 2017. This shortcoming could hinder stakeholders' ability to draw reliable conclusions from these studies.
While preregistration could reduce reporting bias, its adoption remains low. Registration platforms could consider creating tailored types for meta-epidemiological research, and journals need to adopt more proactive measures to enforce reporting standards.
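The time-series segmented linear regression used above to test for a level and slope change after guideline publication can be sketched as an ordinary least-squares fit; the yearly scores below are invented for illustration:

```python
import numpy as np

# Illustrative yearly mean reporting-quality scores, 2010-2023,
# with the reporting guideline assumed published in 2017.
years = np.arange(2010, 2024)
scores = np.array([10.1, 10.4, 10.2, 10.8, 10.6, 11.0, 10.9,
                   11.2, 11.1, 11.3, 11.0, 11.4, 11.2, 11.5])

time = years - years[0]                  # overall time trend
post = (years > 2017).astype(float)      # level change after the guideline
time_post = post * (years - 2017)        # slope change after the guideline

# Design matrix: intercept, pre-trend, level shift, slope shift
X = np.column_stack([np.ones(len(years)), time, post, time_post])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
level_change, slope_change = beta[2], beta[3]
```

In the study, neither the level-shift nor the slope-shift coefficient reached statistical significance, which is how the authors conclude there was no post-2017 improvement.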


Subjects
Epidemiologic Studies , Humans , Meta-Analysis as Topic , Research Design/standards , Guideline Adherence , Research Report/standards , Journal Impact Factor , Guidelines as Topic
11.
Front Immunol ; 15: 1429895, 2024.
Article in English | MEDLINE | ID: mdl-39229262

ABSTRACT

Background: Multiple sclerosis (MS) is the most common non-traumatic disabling disease affecting young adults. A definitive curative treatment is currently unavailable. Many randomized controlled trials (RCTs) have reported on the efficacy of Chinese herbal medicine (CHM) for MS. Because of the uncertain quality of these RCTs, recommendations for the routine use of CHM for MS remain inconclusive, and a comprehensive evaluation of the quality of RCTs of CHM for MS is urgently needed. Methods: Nine databases, namely, PubMed, Embase, Web of Science, Cochrane Library, EBSCO, Sinomed, Wanfang Database, China National Knowledge Infrastructure, and VIP Database, were searched from inception to September 2023. RCTs comparing CHM with placebo or pharmacological interventions for MS were considered eligible. The Consolidated Standards of Reporting Trials (CONSORT) and its extension for CHM formulas (CONSORT-CHM Formulas) checklists were used to evaluate the reporting quality of the RCTs. The risk of bias was assessed using the Cochrane Risk of Bias tool. High-frequency herbs for MS were defined as the top-ranked herbs whose cumulative frequency exceeded 50%. Results: A total of 25 RCTs were included. Across the included RCTs, 33% of the CONSORT items and 21% of the CONSORT-CHM Formulas items were reported. Among the CONSORT core items, title eligibility, sample size calculation, allocation concealment, randomization implementation, and blinding description were each reported by fewer than 5% of trials. For the CONSORT-CHM Formulas, the source and authentication method of each CHM ingredient were particularly poorly reported. Most studies classified the risk of bias as "unclear" due to insufficient information. The five most frequently used herbs were, in order, Radix Rehmanniae Preparata, Radix Rehmanniae Recens, Herba Epimedii, Scorpio, and Poria. No serious adverse effects were reported. Conclusions: The low reporting of CONSORT items and the unclear risk of bias indicate the inadequate quality of the RCTs in terms of reporting completeness and result validity. The CONSORT-CHM Formulas checklist appropriately considers the unique characteristics of CHM, including principles, formulas, and Chinese medicinal substances. To improve the quality of RCTs on CHM for MS, researchers should adhere more closely to the CONSORT-CHM Formulas guidelines and ensure comprehensive disclosure of all study design elements.


Subjects
Drugs, Chinese Herbal , Multiple Sclerosis , Randomized Controlled Trials as Topic , Humans , Multiple Sclerosis/drug therapy , Randomized Controlled Trials as Topic/standards , Drugs, Chinese Herbal/therapeutic use , Drugs, Chinese Herbal/adverse effects , Drugs, Chinese Herbal/standards , Bias , Treatment Outcome , Research Design/standards
12.
Brain Behav ; 14(9): e3629, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39262200

ABSTRACT

BACKGROUND: As the methodological quality and evidence level of the existing systematic reviews (SRs) on music as an intervention for depression have not been thoroughly evaluated, a systematic evaluation and re-evaluation (SERE) was conducted. METHODS: Multiple databases, including PubMed, Web of Science, Embase, China National Knowledge Infrastructure, SinoMed, Wanfang, and the VIP database, were searched for SRs and meta-analyses (MAs) on the effectiveness of music as an intervention for depression. Literature screening, evaluation of methodological quality, and assessment of evidence level were carried out by a team of researchers. Methodological quality was evaluated using the Assessment of Multiple Systematic Reviews 2 (AMSTAR 2) scale, reporting was assessed against the 2020 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, and the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) criteria were utilized to assess the level of evidence. RESULTS: A total of 18 SRs were included in the analysis. Items covered by the 2020 PRISMA guidelines, such as search terms, funding sources, statistical methods for missing values, subgroup and sensitivity analyses, certainty assessment, excluded literature citations, assessment of publication bias, protocol information, conflicts of interest, and data availability, were rarely reported. Evaluation with the AMSTAR 2 scale revealed that one article was rated as high quality, six were rated as low quality, and 11 were rated as very low quality. Based on the GRADE criteria, the quality of the evidence was inconsistent, consisting primarily of medium-quality evidence. CONCLUSION: The methodological quality of SRs/MAs of music as an intervention for depression is generally poor, and the level of evidence is generally low.


Subjects
Music Therapy , Humans , Depression/diagnosis , Depression/therapy , Depressive Disorder/diagnosis , Depressive Disorder/therapy , Systematic Reviews as Topic/standards , Research Design/standards
13.
Ugeskr Laeger ; 186(37)2024 Sep 09.
Article in Danish | MEDLINE | ID: mdl-39323247

ABSTRACT

This review describes the core outcome set (COS): a consensus-based minimum set of outcomes to be collected and reported in clinical trials involving a particular disease or population. A COS serves as a guideline for global consensus on which outcome domains should be collected in all clinical trials. After defining what to measure, it becomes crucial to reach consensus on how to measure it. This includes the selection of appropriate outcome measurement instruments with credible measurement properties and interpretable thresholds of meaning.


Subjects
Outcome Assessment, Health Care , Humans , Clinical Trials as Topic/standards , Consensus , Biomedical Research/standards , Research Design/standards
14.
PLoS One ; 19(9): e0310429, 2024.
Article in English | MEDLINE | ID: mdl-39292656

ABSTRACT

Mediation analysis is commonly implemented in psychological, epidemiological, and social behavior studies to identify factors that may mediate associations between exposures and physical or psychological outcomes. Various analytical tools are available to perform mediation analyses, among which Mplus is widely used due to its user-friendly interface. In practice, the extensive output provided by Mplus, such as the estimated standardized and unstandardized effect sizes, can make it difficult for researchers to choose the results that match their studies. Through a comprehensive review, and utilizing findings from an established study, we proposed guidelines and recommendations to help users select between standardized and unstandardized results based on data attributes and users' hypotheses. We also provided guidelines for choosing among several types of standardized values based on the types of variables involved, including exposures, mediators, and outcomes.
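The distinction the paper addresses, between unstandardized and standardized effects, can be seen in a minimal simulated mediation model. This is an illustration under made-up data, not the authors' workflow, and Mplus itself is not used here:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                        # exposure
m = 0.5 * x + rng.normal(size=n)              # mediator
y = 0.4 * m + 0.3 * x + rng.normal(size=n)    # outcome

def ols_slope(pred, dep, covar=None):
    """OLS coefficient of `pred`, optionally adjusting for one covariate."""
    cols = [np.ones_like(pred), pred]
    if covar is not None:
        cols.append(covar)
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), dep, rcond=None)
    return beta[1]

a = ols_slope(x, m)            # exposure -> mediator path
b = ols_slope(m, y, covar=x)   # mediator -> outcome path, given exposure
indirect = a * b               # unstandardized indirect effect (~0.2 here)
indirect_std = indirect * x.std() / y.std()   # rescaled to x and y SD units
```

The unstandardized effect is in the raw units of x and y, while the standardized version rescales by their standard deviations; which one to report depends on the data attributes and hypotheses, which is exactly the choice the guidelines address.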


Subjects
Guidelines as Topic , Mediation Analysis , Humans , Research Design/standards
15.
Angle Orthod ; 94(5): 479-487, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39230025

ABSTRACT

Adequate and transparent reporting is necessary for critically appraising published research, yet ample evidence suggests that the design, conduct, analysis, interpretation, and reporting of oral health research could be greatly improved. Accordingly, the Task Force on Design and Analysis in Oral Health Research, comprising statisticians and trialists from academia and industry, identified the minimum information needed to report and evaluate observational studies and clinical trials in oral health: the OHStat guidelines. Drafts were circulated to the editors of 85 oral health journals and to Task Force members and sponsors, and discussed at a December 2020 workshop attended by 49 researchers. The guidelines were subsequently revised by the Task Force writing group. They draw heavily from the Consolidated Standards for Reporting Trials (CONSORT), the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE), and CONSORT harms guidelines, and incorporate the SAMPL guidelines for reporting statistics, the CLIP principles for documenting images, and the GRADE approach for indicating the quality of evidence. The guidelines also recommend reporting estimates in clinically meaningful units using confidence intervals, rather than relying on P values. In addition, OHStat introduces seven new guidelines that concern the text itself, such as checking the congruence between abstract and text, structuring the discussion, and listing conclusions to make them more specific. OHStat does not replace other reporting guidelines; it incorporates those most relevant to dental research into a single document. Manuscripts using the OHStat guidelines will provide more information specific to oral health research.


Subjects
Checklist , Clinical Trials as Topic , Observational Studies as Topic , Oral Health , Humans , Oral Health/standards , Clinical Trials as Topic/standards , Dental Research/standards , Research Design/standards , Publishing/standards , Guidelines as Topic , Research Report/standards
16.
BMC Med Res Methodol ; 24(1): 193, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39232661

ABSTRACT

BACKGROUND: Missing data are common in observational studies and often occur in several of the variables required when estimating a causal effect, i.e. the exposure, outcome and/or variables used to control for confounding. Analyses involving multiple incomplete variables are not as straightforward as analyses with a single incomplete variable. For example, in the context of multivariable missingness, the standard missing data assumptions ("missing completely at random", "missing at random" [MAR], "missing not at random") are difficult to interpret and assess. It is not clear how the complexities that arise due to multivariable missingness are being addressed in practice. The aim of this study was to review how missing data are managed and reported in observational studies that use multiple imputation (MI) for causal effect estimation, with a particular focus on missing data summaries, missing data assumptions, primary and sensitivity analyses, and MI implementation. METHODS: We searched five top general epidemiology journals for observational studies that aimed to answer a causal research question and used MI, published between January 2019 and December 2021. Article screening and data extraction were performed systematically. RESULTS: Of the 130 studies included in this review, 108 (83%) derived an analysis sample by excluding individuals with missing data in specific variables (e.g., outcome) and 114 (88%) had multivariable missingness within the analysis sample. Forty-four (34%) studies provided a statement about missing data assumptions, 35 of which stated the MAR assumption, but only 11/44 (25%) studies provided a justification for these assumptions. The number of imputations, MI method and MI software were generally well-reported (71%, 75% and 88% of studies, respectively), while aspects of the imputation model specification were not clear for more than half of the studies. 
A secondary analysis that used a different approach to handle the missing data was conducted in 69/130 (53%) studies. Of these 69 studies, 68 (99%) lacked a clear justification for the secondary analysis. CONCLUSION: Effort is needed to clarify the rationale for and improve the reporting of MI for estimation of causal effects from observational data. We encourage greater transparency in making and reporting analytical decisions related to missing data.
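After the imputation step the review audits, results from the m imputed datasets are combined with Rubin's rules, which are compact enough to write out. The estimates below are made up for illustration:

```python
import numpy as np

def pool_rubin(estimates, within_variances):
    """Pool an estimate across m imputed datasets using Rubin's rules.

    Returns the pooled point estimate and its total variance
    (within-imputation plus between-imputation components).
    """
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(within_variances, dtype=float)
    m = len(q)
    q_bar = q.mean()                 # pooled point estimate
    w = u.mean()                     # average within-imputation variance
    b = q.var(ddof=1)                # between-imputation variance
    total = w + (1 + 1 / m) * b      # Rubin's total variance
    return q_bar, total

# Five imputations with hypothetical coefficients and variances
est, var = pool_rubin([1.0, 1.2, 0.9, 1.1, 1.05], [0.04] * 5)
```

The between-imputation component is what reflects uncertainty due to the missing data itself, which is why reporting only a single imputed analysis understates the total variance.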


Subjects
Observational Studies as Topic , Research Design , Causality , Data Interpretation, Statistical , Research Design/standards
20.
BMJ Paediatr Open ; 8(1)2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39284617

ABSTRACT

As statistical reviewers and editors for BMJ Paediatrics Open (BMJPO), we frequently see methodological and statistical errors in articles submitted to our journal. To compile a list of these common errors and propose suitable corrections, and inspired by similar efforts at other leading journals, we surveyed the statistical reviewers and editors at BMJPO to collect their 'pet peeves' and examples of best practices (1, 2). We have divided these into seven sections: graphics; statistical significance and related issues; presentation, vocabulary, textual and tabular presentation; causality; model building, regression and choice of methods; meta-analysis; and miscellaneous. Here, we present the common errors, with brief explanations. We hope that the guidance provided here will help authors as they prepare their submissions to the journal, leading to higher-quality and more robust research reporting.


Subjects
Research Design , Humans , Research Design/standards , Periodicals as Topic , Data Interpretation, Statistical , Pediatrics , Statistics as Topic/methods