Results 1 - 3 of 3
1.
Radiol Artif Intell; 6(2): e230205, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38265301

ABSTRACT

This study evaluated the ability of generative large language models (LLMs) to detect speech recognition errors in radiology reports. A dataset of 3233 CT and MRI reports was assessed by radiologists for speech recognition errors. Errors were categorized as clinically significant or not clinically significant. The performance of five generative LLMs (GPT-3.5-turbo, GPT-4, text-davinci-003, Llama-v2-70B-chat, and Bard) was compared in detecting these errors, using manual error detection as the reference standard. Prompt engineering was used to optimize model performance. GPT-4 demonstrated high accuracy in detecting clinically significant errors (precision, 76.9%; recall, 100%; F1 score, 86.9%) and not clinically significant errors (precision, 93.9%; recall, 94.7%; F1 score, 94.3%). Text-davinci-003 achieved F1 scores of 72% and 46.6% for clinically significant and not clinically significant errors, respectively. GPT-3.5-turbo obtained F1 scores of 59.1% and 32.2%, while Llama-v2-70B-chat scored 72.8% and 47.7%. Bard showed the lowest accuracy, with F1 scores of 47.5% and 20.9%. GPT-4 effectively identified challenging errors such as nonsense phrases and internally inconsistent statements. Longer reports, resident dictation, and overnight shifts were associated with higher error rates. In conclusion, advanced generative LLMs show potential for automatic detection of speech recognition errors in radiology reports. Keywords: CT, Large Language Model, Machine Learning, MRI, Natural Language Processing, Radiology Reports, Speech, Unsupervised Learning. Supplemental material is available for this article.
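As a quick check on the figures above, the F1 score is the harmonic mean of precision and recall. A minimal Python sketch (not from the article) reproducing GPT-4's reported scores:

```python
# Minimal sketch: F1 as the harmonic mean of precision and recall,
# using GPT-4's figures reported in the abstract above.

def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Clinically significant errors: precision 76.9%, recall 100%
print(f"{f1(0.769, 1.000):.1%}")  # 86.9%

# Not clinically significant errors: precision 93.9%, recall 94.7%
print(f"{f1(0.939, 0.947):.1%}")  # 94.3%
```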


Subject(s)
New World Camelids, Radiology Information Systems, Radiology, Speech Perception, Animals, Speech, Speech Recognition Software, Reproducibility of Results
3.
J Surg Educ; 78(5): 1419-1424, 2021.
Article in English | MEDLINE | ID: mdl-33487587

ABSTRACT

OBJECTIVE: We describe a pipeline for creating and publishing online schematic 3D anatomical models that requires minimal resources and facilitates an intuitive understanding of complex surgical structures, using the inguinal canal as an example. DESIGN: The open-source 3D modeling software Blender was used to generate the inguinal canal model. With screen recording enabled, the model was annotated within a 3D space, and the resultant video tutorial was uploaded to YouTube. The 3D model was also exported to an online web portal that students could navigate independently. Feedback was collated from YouTube and the online platform over two years via video comments and an online form for platform visitors. SETTING: Department of Surgery, Western Precinct, University of Melbourne, Melbourne, Australia. PARTICIPANTS: A total of 5,438 students used the online platform over the past 24 months. Video tutorials depicting the inguinal canal model were viewed a total of 162,181 times across the same period. RESULTS: Feedback was uniformly positive, with a predominant theme of faster comprehension attributed to the visuospatial feedback complementing traditional resources. CONCLUSIONS: The development of online 3D schematic models is achievable with free and readily accessible computer software. These models allow students to "walk through" complex anatomical areas, which may enable them to better orient themselves and understand previously difficult-to-teach surgical concepts.
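The article's exact pipeline is not reproduced here, but as an illustration of the export step such a workflow might use, Blender's bundled Python API (bpy) can write a scene to glTF, a format widely supported by web-based 3D viewers. A minimal sketch under that assumption (the output path is a placeholder):

```python
# Minimal sketch (not the authors' code): exporting a Blender scene to
# binary glTF (.glb) so a web 3D viewer can load it. Run inside Blender,
# e.g.: blender model.blend --background --python export_glb.py
import bpy

# Export the entire scene as one .glb file; the path is a placeholder.
bpy.ops.export_scene.gltf(
    filepath="/tmp/inguinal_canal.glb",  # hypothetical output location
    export_format="GLB",                 # single self-contained binary file
)
```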


Subject(s)
Three-Dimensional Imaging, Anatomic Models, Feedback, Humans, Software, Students