2.
JMIR Form Res; 8: e59434, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-38986153

ABSTRACT

BACKGROUND: Patients find technology tools more approachable for seeking sensitive health-related information, such as reproductive health information. The conversational ability of artificial intelligence (AI) chatbots, such as ChatGPT (OpenAI Inc), offers a potential means for patients to locate answers to their health-related questions digitally. OBJECTIVE: This pilot study compared the novel ChatGPT with existing Google Search technology on their ability to offer accurate, effective, and current information about what to do after missing a dose of an oral contraceptive pill. METHODS: A sequence of 11 questions, mimicking a patient asking what to do after missing a dose of an oral contraceptive pill, was input into ChatGPT as a cascade, given ChatGPT's conversational ability. The questions were input into 4 different ChatGPT accounts, with account holders of various demographics, to evaluate potential differences and biases in the responses given to different account holders. Given its nonconversational nature, only the leading question, "what should I do if I missed a day of my oral contraception birth control?", was input into Google Search. The ChatGPT responses and the Google Search results for the leading question were evaluated on readability, accuracy, and effective delivery of information. RESULTS: The ChatGPT responses were written at a higher overall reading grade level, took longer to read, and were less accurate, less current, and less effective in delivering information. In contrast, the Google Search answer box and snippets were written at a lower reading grade level, took less time to read, were more current, referenced the origin of the information (transparency), and presented the information in various formats in addition to text. CONCLUSIONS: ChatGPT needs improvement in accuracy, transparency, recency, and reliability before it can be equitably implemented in health care information delivery and provide its potential benefits. However, AI may serve as a tool for providers to educate their patients in preferred, creative, and efficient ways, such as generating accessible short educational videos from information vetted by health care providers. Larger studies representing a diverse group of users are needed.
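
The readability side of a comparison like this can be reproduced with standard reading-level formulas. Below is a minimal sketch, assuming the third-party textstat package and an illustrative 200-words-per-minute reading rate; the sample answer is hypothetical, not study data.

```python
# Sketch: score a response for reading grade level and estimated reading time,
# mirroring the readability comparison described above.
# Assumes the "textstat" package (pip install textstat); the words-per-minute
# rate is an illustrative assumption, not the study's method.
import textstat

def readability_report(text: str, words_per_minute: int = 200) -> dict:
    """Return Flesch-Kincaid grade level and estimated reading minutes."""
    word_count = len(text.split())
    return {
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "reading_minutes": word_count / words_per_minute,
    }

# Hypothetical chatbot answer, for illustration only.
answer = "If you miss one active pill, take it as soon as you remember."
print(readability_report(answer))
```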

3.
J Med Internet Res; 26: e54571, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38935937

ABSTRACT

BACKGROUND: Artificial intelligence, particularly chatbot systems, is becoming an instrumental tool in health care, aiding clinical decision-making and patient engagement. OBJECTIVE: This study aims to analyze the performance of ChatGPT-3.5 and ChatGPT-4 in addressing complex clinical and ethical dilemmas and to illustrate their potential role in health care decision-making, while comparing ratings from senior physicians and residents and across question types. METHODS: Four specialist physicians formulated 176 real-world clinical questions. Eight senior physicians and residents assessed responses from GPT-3.5 and GPT-4 on a 1-5 scale across 5 categories: accuracy, relevance, clarity, utility, and comprehensiveness. Evaluations were conducted within internal medicine, emergency medicine, and ethics. Comparisons were made globally, between seniors and residents, and across question classifications. RESULTS: Both GPT models received high mean scores (4.4, SD 0.8 for GPT-4 and 4.1, SD 1.0 for GPT-3.5). GPT-4 outperformed GPT-3.5 across all rating dimensions, with seniors consistently rating responses higher than residents for both models. Specifically, seniors rated GPT-4 as more beneficial and complete (mean 4.6 vs 4.0 and 4.6 vs 4.1, respectively; P<.001), and rated GPT-3.5 similarly (mean 4.1 vs 3.7 and 3.9 vs 3.5, respectively; P<.001). Ethical queries received the highest ratings for both models, with mean scores reflecting consistency across accuracy and completeness criteria. Distinctions among question types were significant, particularly for GPT-4's mean completeness scores across emergency, internal medicine, and ethical questions (4.2, SD 1.0; 4.3, SD 0.8; and 4.5, SD 0.7, respectively; P<.001), and for GPT-3.5's accuracy, benefit, and completeness dimensions. CONCLUSIONS: ChatGPT's potential to assist physicians with medical issues is promising, with prospects to enhance diagnostics, treatment, and ethics. While integration into clinical workflows may be valuable, it must complement, not replace, human expertise. Continued research is essential to ensure safe and effective implementation in clinical environments.
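
For the kind of rating comparison reported here, an independent-samples t test over 1-5 scores is a natural starting point. The sketch below assumes SciPy and uses made-up rater scores purely for illustration; it is not the study's data or full analysis.

```python
# Sketch: compare mean 1-5 ratings for GPT-4 vs GPT-3.5 responses with an
# independent-samples t test. The arrays are hypothetical placeholder scores.
import numpy as np
from scipy import stats

gpt4_accuracy = np.array([5, 4, 5, 4, 5, 4, 4, 5])   # hypothetical rater scores
gpt35_accuracy = np.array([4, 4, 3, 4, 5, 3, 4, 4])  # hypothetical rater scores

t, p = stats.ttest_ind(gpt4_accuracy, gpt35_accuracy)
print(f"GPT-4 mean {gpt4_accuracy.mean():.2f} vs "
      f"GPT-3.5 mean {gpt35_accuracy.mean():.2f}, t={t:.2f}, p={p:.3f}")
```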


Subject(s)
Clinical Decision-Making, Humans, Artificial Intelligence
4.
Article in English | MEDLINE | ID: mdl-38898884

ABSTRACT

Human papillomavirus (HPV) vaccination rates are lower than expected. To protect against the onset of head and neck cancers, innovative strategies to improve these rates are needed. Artificial intelligence may offer solutions, specifically conversational agents that deliver counseling. We present our efforts in developing a dialogue model for automating motivational interviewing (MI) to encourage HPV vaccination. We developed a formalized dialogue model for MI using an existing ontology-based framework, producing a computable representation in OWL 2. New utterance classifications were identified along with the ontology that encodes the dialogue model. Our work is available on GitHub under the GPL v3. We discuss how an ontology-based model of MI can help standardize and formalize MI counseling for HPV vaccine uptake. Our future steps involve assessing the MI fidelity of the ontology model, operationalizing it, and testing the dialogue model in a simulation with live participants.
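
As a rough illustration of what an OWL 2 encoding of MI dialogue concepts can look like, the sketch below uses the owlready2 package; the class names and IRI are invented for this example and are not the authors' published ontology.

```python
# Sketch: encode a fragment of a motivational-interviewing utterance hierarchy
# as an OWL 2 ontology. Class names and the IRI are illustrative assumptions.
from owlready2 import get_ontology, Thing

onto = get_ontology("http://example.org/mi-dialogue.owl")

with onto:
    class Utterance(Thing): pass
    class PatientUtterance(Utterance): pass
    class CounselorUtterance(Utterance): pass
    class ChangeTalk(PatientUtterance): pass       # e.g., "I want to get vaccinated"
    class SustainTalk(PatientUtterance): pass      # e.g., "I'm not sure I need it"
    class ReflectiveListening(CounselorUtterance): pass

# Serialize the computable representation to RDF/XML.
onto.save(file="mi-dialogue.owl", format="rdfxml")
```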

5.
JMIR Hum Factors; 11: e54581, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38683664

ABSTRACT

BACKGROUND: The use of chatbots in mental health support has increased exponentially in recent years, with studies showing that they may be effective in treating mental health problems. More recently, visual avatars called digital humans have been introduced. Digital humans can use facial expressions as another dimension in human-computer interaction. It is important to study differences in emotional response and usability preferences between text-based chatbots and digital humans for interacting with mental health services. OBJECTIVE: This study aims to explore to what extent a digital human interface and a text-only chatbot interface differ in usability when tested by healthy participants, using BETSY (Behavior, Emotion, Therapy System, and You), which offers 2 distinct interfaces: a digital human with anthropomorphic features and a text-only user interface. We also set out to explore how chatbot-generated conversations on mental health (specific to each interface) affect self-reported feelings and biometrics. METHODS: We explored to what extent a digital human with anthropomorphic features differed from a traditional text-only chatbot in perceived usability (System Usability Scale), emotional reactions (electroencephalography), and feelings of closeness. Healthy participants (n=45) were randomized to 2 groups that used either a digital human with anthropomorphic features (n=25) or a text-only chatbot without such features (n=20). The groups were compared by linear regression analysis and t tests. RESULTS: No differences were observed between the text-only and digital human groups regarding demographic features. The mean System Usability Scale score was 75.34 (SD 10.01; range 57-90) for the text-only chatbot versus 64.80 (SD 14.14; range 40-90) for the digital human interface. Both groups scored their respective chatbot interfaces as average or above average in usability. Women were more likely to report feeling annoyed by BETSY. CONCLUSIONS: The text-only chatbot was perceived as significantly more user-friendly than the digital human, although there were no significant differences in electroencephalography measurements. Male participants exhibited lower levels of annoyance with both interfaces, contrary to previously reported findings.
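
The System Usability Scale scores reported here follow the standard SUS scoring rule: odd (positively keyed) items contribute their rating minus 1, even items contribute 5 minus their rating, and the 0-40 raw sum is scaled by 2.5. A minimal sketch, with a hypothetical participant's answers:

```python
# Sketch: standard SUS scoring from the ten 1-5 item responses.
def sus_score(responses: list[int]) -> float:
    """responses: ten 1-5 answers, item 1 first. Odd items are positive-keyed."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale 0-40 raw points to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2]))  # hypothetical participant -> 82.5
```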


Subject(s)
User-Computer Interface, Humans, Female, Male, Adult, Healthy Volunteers, Mental Health, Electroencephalography/methods, Emotions
6.
JMIR Ment Health; 11: e55988, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38593424

ABSTRACT

BACKGROUND: Large language models (LLMs) hold potential for mental health applications. However, their opaque alignment processes may embed biases that shape problematic perspectives. Evaluating the values embedded within LLMs that guide their decision-making has ethical importance. Schwartz's theory of basic values (STBV) provides a framework for quantifying cultural value orientations and has shown utility for examining values in mental health contexts, including cultural, diagnostic, and therapist-client dynamics. OBJECTIVE: This study aimed to (1) evaluate whether the STBV can measure value-like constructs within leading LLMs and (2) determine whether LLMs exhibit value-like patterns distinct from humans and from each other. METHODS: In total, 4 LLMs (Bard, Claude 2, Generative Pretrained Transformer [GPT]-3.5, GPT-4) were anthropomorphized and instructed to complete the Portrait Values Questionnaire-Revised (PVQ-RR) to assess value-like constructs. Their responses over 10 trials were analyzed for reliability and validity. To benchmark the LLMs' value profiles, their results were compared to published data from a diverse sample of 53,472 individuals across 49 nations who had completed the PVQ-RR. This allowed us to assess whether the LLMs diverged from established human value patterns across cultural groups. Value profiles were also compared between models via statistical tests. RESULTS: The PVQ-RR showed good reliability and validity for quantifying value-like constructs within the LLMs. However, substantial divergence emerged between the LLMs' value profiles and the population data. The models lacked consensus and exhibited distinct motivational biases, reflecting opaque alignment processes. For example, all models prioritized universalism and self-direction while de-emphasizing achievement, power, and security relative to humans. A discriminant analysis successfully differentiated the 4 LLMs' distinct value profiles. Further examination found that these biased value profiles strongly predicted the LLMs' responses when presented with mental health dilemmas requiring a choice between opposing values, further validating that the models embed distinct motivational value-like constructs that shape their decision-making. CONCLUSIONS: This study leveraged the STBV to map the motivational value-like constructs underpinning leading LLMs. Although the STBV can effectively characterize such constructs within LLMs, their substantial divergence from human values raises ethical concerns about deploying these models in mental health applications. The biases toward certain cultural value sets pose risks if integrated without proper safeguards. For example, prioritizing universalism could promote unconditional acceptance even when clinically unwise. Furthermore, the differences between the LLMs underscore the need to standardize alignment processes to capture true cultural diversity. Thus, any responsible integration of LLMs into mental health care must account for their embedded biases and motivational mismatches to ensure equitable delivery across diverse populations. Achieving this will require transparency and refinement of alignment techniques to instill comprehensive human values.
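
Scoring Schwartz value questionnaires such as the PVQ-RR conventionally involves centering each respondent's raw value means on their overall mean rating (MRAT) before profiles are compared. The sketch below illustrates that centering step with a tiny invented item-to-value key; it is not the real 57-item PVQ-RR scoring key.

```python
# Sketch: MRAT-centered value scores for one respondent (or one LLM trial).
# ITEM_KEY is an illustrative assumption, not the published PVQ-RR key.
import numpy as np

ITEM_KEY = {  # hypothetical mapping: value -> indices of its items
    "universalism": [0, 3],
    "self_direction": [1, 4],
    "power": [2, 5],
}

def centered_value_scores(ratings: np.ndarray) -> dict:
    """ratings: one respondent's 1-6 answers to all items, in item order."""
    mrat = ratings.mean()  # respondent's overall mean rating
    return {value: ratings[items].mean() - mrat
            for value, items in ITEM_KEY.items()}

llm_trial = np.array([6, 5, 2, 6, 5, 1])  # illustrative answers from one trial
print(centered_value_scores(llm_trial))   # positive = prioritized vs own mean
```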


Subject(s)
Allied Health Personnel, Mental Health, Humans, Cross-Sectional Studies, Reproducibility of Results, Language
8.
Sensors (Basel); 23(15), 2023 Jul 30.
Article in English | MEDLINE | ID: mdl-37571587

ABSTRACT

With the popularity of ChatGPT, dialogue systems have received increasing attention. Researchers are dedicated to designing knowledgeable models that can converse like humans. Traditional seq2seq dialogue models often suffer from limited performance and a tendency to generate bland, generic ("safe") responses. In recent years, large-scale pretrained language models have demonstrated powerful capabilities across various domains, and many studies have leveraged them for dialogue tasks to address such concerns. Pretrained models carry knowledge acquired from large-scale training data, which can enhance responses. However, when specific knowledge is required in a particular domain, a model may still generate bland or inappropriate responses, and the interpretability of such models is poor. Therefore, in this paper, we propose the KRP-DS model. We design a knowledge module that incorporates a knowledge graph as external knowledge in the dialogue system. The module uses contextual information for path reasoning and guides knowledge prediction; the predicted knowledge is then used to enhance response generation. Experimental results show that our proposed model effectively improves the quality and diversity of responses while offering better interpretability, and it outperforms baseline models in both automatic and human evaluations.
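
As a toy illustration of path reasoning over a knowledge graph, the sketch below runs a breadth-first search over a handful of invented triples; the paper's knowledge module learns which paths to follow and how to use context, whereas this simply enumerates candidate paths.

```python
# Sketch: enumerate paths in a tiny triple store linking two entities, the kind
# of fact-finding a knowledge module can feed into response generation.
# The triples, entities, and hop limit are illustrative assumptions.
from collections import deque

TRIPLES = [
    ("insomnia", "treated_by", "sleep_hygiene"),
    ("sleep_hygiene", "includes", "fixed_bedtime"),
    ("insomnia", "symptom_of", "anxiety"),
]

def find_paths(start: str, goal: str, max_hops: int = 2):
    """Breadth-first search over the triple store for paths from start to goal."""
    queue = deque([(start, [])])
    while queue:
        node, path = queue.popleft()
        if node == goal and path:
            yield path
        if len(path) < max_hops:
            for h, r, t in TRIPLES:
                if h == node:
                    queue.append((t, path + [(h, r, t)]))

for path in find_paths("insomnia", "fixed_bedtime"):
    print(" ; ".join(f"{h} -{r}-> {t}" for h, r, t in path))
```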


Asunto(s)
Comunicación , Reconocimiento de Normas Patrones Automatizadas , Humanos , Conocimiento , Bases del Conocimiento , Lenguaje
9.
Autoimmun Rev; 22(8): 103360, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37211242

ABSTRACT

The field of medical research has always been full of innovation and huge leaps that revolutionize the scientific world. In recent years, we have witnessed this firsthand with the evolution of artificial intelligence (AI), ChatGPT being the most recent example. ChatGPT is a language chatbot that generates human-like text based on data from the internet. From a medical point of view, ChatGPT has shown the capability to compose medical texts similar to those written by experienced authors, solve clinical cases, and provide medical solutions, among other fascinating performances. Nevertheless, the value of its results, its limitations, and the clinical implications still need to be carefully evaluated. In this paper on the role of ChatGPT in clinical medicine, particularly in the field of autoimmunity, we aim to illustrate the implications of this technology alongside its latest uses and limitations. In addition, we include an expert opinion on the cyber-related aspects of the bot that potentially contribute to the risks attributed to its use, alongside proposed defense mechanisms, all while taking into consideration the rapid, continuous improvement AI undergoes on a daily basis.


Subject(s)
Autoimmunity, Biomedical Research, Humans, Artificial Intelligence, Internet
10.
Healthcare (Basel); 8(2), 2020 Jun 03.
Article in English | MEDLINE | ID: mdl-32503298

ABSTRACT

Since its discovery, the coronavirus (nCOV-19) has become a global pandemic. At the same time, managing the flow of the high number of cases has been a great challenge for hospitals and health care staff. In remote areas especially, consulting a medical specialist becomes more difficult when the epidemic hits. It is therefore clear that an effectively designed and deployed chatbot can help patients living in remote areas by promoting preventive measures, providing virus updates, and reducing the psychological damage caused by isolation and fear. This study presents the design of a sophisticated artificial intelligence (AI) chatbot for diagnostic evaluation and for recommending immediate measures when patients are exposed to nCOV-19. In addition, the virtual assistant can measure infection severity and connect patients with registered doctors when symptoms become serious.
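
A severity-triage step of the kind described might, at its simplest, be a rule-based check before escalation. The sketch below uses invented symptom sets and thresholds purely for illustration; it is not clinical guidance and not the paper's actual logic.

```python
# Sketch: rule-based severity triage before escalating to a registered doctor.
# Symptom lists and temperature thresholds are illustrative assumptions.
SEVERE_SYMPTOMS = {"shortness_of_breath", "chest_pain", "confusion"}

def triage(symptoms: set[str], temperature_c: float) -> str:
    if symptoms & SEVERE_SYMPTOMS or temperature_c >= 39.5:
        return "severe: connect to a registered doctor"
    if symptoms or temperature_c >= 38.0:
        return "mild: self-isolate, monitor symptoms, follow prevention advice"
    return "no symptoms: share prevention tips and virus updates"

print(triage({"cough", "fever"}, 38.2))  # -> mild
```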

11.
Article in English | MEDLINE | ID: mdl-31632598

ABSTRACT

This paper discusses whether bots, particularly chatbots, can be useful in public health research and in health or pharmacy systems operations. Bots have been discussed for many years; particularly when coupled with artificial intelligence, they offer the opportunity to automate mundane or error-prone processes and tasks by replacing human involvement. This paper discusses areas with greater advances in the use of bots, as well as areas that may benefit from their use, and offers practical ways to get started with bot technology. Several popular bot applications and bot development tools are discussed, along with practical security considerations, and a toolbox for beginning to implement bots is presented.
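
As one "practical way to get started," a rule-based chatbot can be as small as a pattern-to-response table. The sketch below is an invented pharmacy-themed example; production bots would use a proper framework plus the input-validation and security controls the paper discusses.

```python
# Sketch: a minimal rule-based chatbot loop. Intent patterns and replies are
# illustrative assumptions, not from the paper's toolbox.
import re

INTENTS = [
    (re.compile(r"\b(refill|prescription)\b", re.I),
     "To refill a prescription, have your Rx number ready and contact the pharmacy."),
    (re.compile(r"\b(hours|open)\b", re.I),
     "The pharmacy is open 9am-6pm, Monday through Saturday."),
]

def reply(message: str) -> str:
    for pattern, answer in INTENTS:
        if pattern.search(message):
            return answer
    return "Sorry, I didn't understand. Try asking about refills or hours."

print(reply("What are your hours?"))
```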
