2.
Curr Opin Support Palliat Care; 18(3): 107-112, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38990711

ABSTRACT

PURPOSE OF REVIEW: Several innovative digital technologies have begun to be applied to diagnosing and treating migraine. We reviewed the potential benefits and opportunities of delivering migraine care through comprehensive digital clinics. RECENT FINDINGS: There are increasing applications of digitization to migraine diagnosis and management, including e-diaries and patient self-management, especially after the COVID-19 pandemic. Digital care delivery appears to better engage chronic migraine sufferers who may struggle to present to physical clinics. SUMMARY: Digital clinics appear to be a promising treatment modality for patients with chronic migraine. They potentially minimize travel time, shorten waiting periods, improve usability, and increase access to neurologists. Additionally, they have the potential to provide care at a much lower cost than traditional physical clinics. However, the current state of evidence mostly draws on case reports, suggesting a need for future randomized trials comparing digital interventions with standard care pathways.


Subject(s)
COVID-19, Migraine Disorders, Telemedicine, Humans, Migraine Disorders/diagnosis, Migraine Disorders/therapy, Telemedicine/organization & administration, COVID-19/epidemiology, Self-Management/methods, SARS-CoV-2
3.
J Med Ethics; 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39074956

ABSTRACT

Can AI substitute for a human physician's second opinion? Recently the Journal of Medical Ethics published two contrasting views: Kempt and Nagel advocate for using artificial intelligence (AI) for a second opinion except when its conclusions significantly diverge from the initial physician's, while Jongsma and Sand argue for a second human opinion irrespective of AI's concurrence or dissent. The crux of this debate hinges on the prevalence and impact of 'false confirmation': a scenario where AI erroneously validates an incorrect human decision. These errors seem exceedingly difficult to detect, resembling heuristics such as confirmation bias. However, this debate has yet to engage with the emergence of explainable AI (XAI), which elaborates on why the AI tool reaches its diagnosis. To advance this debate, we outline a framework for conceptualising decision-making errors in physician-AI collaborations. We then review emerging evidence on the magnitude of false confirmation errors. Our simulations show that they are likely to be pervasive in clinical practice, decreasing diagnostic accuracy to between 5% and 30%. We conclude with a pragmatic approach to employing AI as a second opinion, emphasising the need for physicians to make clinical decisions before consulting AI, employing nudges to increase awareness of false confirmations, and critically engaging with XAI explanations. This approach underscores the necessity for a cautious, evidence-based methodology when integrating AI into clinical decision-making.
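The mechanics of false confirmation are easier to see with a toy simulation. The sketch below is not the authors' model: the accuracy figures, the error-correlation parameter, and the idealised rule that disagreement always triggers a correct human review are all assumptions chosen purely for illustration. It only shows how an AI whose errors correlate with the physician's can confirm wrong diagnoses rather than catch them.

```python
# Illustrative Monte Carlo sketch (hypothetical parameters, not the paper's simulation):
# how often does an AI "second opinion" falsely confirm an incorrect physician
# diagnosis, and what happens to the accuracy of the combined decision?
import random

PHYSICIAN_ACC = 0.85   # assumed probability the physician's initial call is correct
AI_ACC = 0.90          # assumed stand-alone accuracy of the AI tool
ERROR_CORR = 0.5       # assumed chance the AI repeats the physician's error on hard cases
N_CASES = 100_000

def simulate(seed: int = 0) -> None:
    rng = random.Random(seed)
    false_confirmations = 0
    combined_correct = 0
    for _ in range(N_CASES):
        truth = rng.random() < 0.5                     # ground-truth label (disease yes/no)
        physician = truth if rng.random() < PHYSICIAN_ACC else not truth
        if physician != truth and rng.random() < ERROR_CORR:
            ai = physician                             # correlated error: AI repeats the mistake
        else:
            ai = truth if rng.random() < AI_ACC else not truth
        if ai == physician:
            final = physician                          # agreement: the initial call stands
            if physician != truth:
                false_confirmations += 1               # AI validated a wrong decision
        else:
            final = truth                              # disagreement triggers human review (idealised as always correct)
        combined_correct += (final == truth)
    print(f"false confirmations: {false_confirmations / N_CASES:.1%} of cases")
    print(f"combined accuracy:   {combined_correct / N_CASES:.1%}")

if __name__ == "__main__":
    simulate()
```

Varying ERROR_CORR makes the point of the debate concrete: the more the AI's mistakes track the physician's, the more wrong calls are confirmed rather than flagged, which is why the abstract stresses recording one's own decision before consulting the AI.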
