Results 1-11 of 11
1.
Explor Res Clin Soc Pharm ; 15: 100478, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39139501

ABSTRACT

Introduction: Pharmacy students are positive towards integrating artificial intelligence and ChatGPT into their practice. The aim of this study was to investigate the direct short-term learning effect of pharmacy students' use of ChatGPT. Methods: This was a randomized experimental study. Students were allocated into two groups: the intervention group (n = 15) used all study tools including ChatGPT, while the control group (n = 16) used all study tools except ChatGPT. Differences between groups were measured by performance on a knowledge test before and after a short study period. Results: No significant difference was found between the intervention and control groups in pretest scores (p = 0.28). There was also no significant effect of using ChatGPT, with an adjusted mean difference of 0.5 points on a 12-point scale. However, there was a trend towards a higher proportion of ChatGPT participants showing a large (at least four-point) increase in score (4 of 15) than in the control group (1 of 16). Conclusion: ChatGPT may have positive effects on learning outcomes in pharmacy students; however, the current study was underpowered to detect a statistically significant effect on short-term learning.

2.
Brain Commun ; 5(6): fcad318, 2023.
Article in English | MEDLINE | ID: mdl-38046096

ABSTRACT

Though phonemic fluency tasks are traditionally indexed by the number of correct responses, the underlying disorder may shape the specific choice of words, both correct and erroneous. We report the first comprehensive qualitative analysis of incorrect and correct words generated on the phonemic ('S') fluency test, in a large sample of patients (n = 239) with focal, unilateral frontal or posterior lesions and healthy controls (n = 136). We conducted detailed qualitative analyses of the single words generated in the phonemic fluency task using categorical descriptions for different types of errors, low-frequency words and clustering/switching. We further analysed patients' and healthy controls' entire sequences of words by employing stochastic block modelling of Generative Pretrained Transformer 3-based deep language representations. We conducted predictive modelling to investigate whether deep language representations of word sequences improved the accuracy of detecting the presence of frontal lesions using the phonemic fluency test. Our qualitative analyses of the single words generated revealed several novel findings. Among the error types analysed, we found a non-lateralized frontal effect for profanities, left frontal effects for proper nouns and permutations, and a left posterior effect for perseverations. For correct words, we found a left frontal effect for low-frequency words. Our novel large language model-based approach found five distinct communities whose varied word selection patterns reflected characteristic demographic and clinical features. Predictive modelling showed that a model based on Generative Pretrained Transformer 3-derived word sequence representations predicted the presence of frontal lesions with greater fidelity than models based on the native features. Our study reveals a characteristic pattern of phonemic fluency responses produced by patients with frontal lesions. These findings demonstrate the significant inferential and diagnostic value of characterizing qualitative features of phonemic fluency performance with large language models and stochastic block modelling.
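
The embed-then-cluster idea behind this approach can be sketched briefly. The following is a minimal stand-in, not the authors' pipeline: it embeds each participant's word sequence with sentence-transformers (in place of GPT-3 representations), links similar sequences in a graph, and applies greedy modularity community detection (a named stand-in for stochastic block modelling); the participant IDs, word lists, and similarity threshold are invented.

```python
# Minimal sketch: embed fluency sequences, link similar ones, find communities.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sentence_transformers import SentenceTransformer  # stand-in for GPT-3

sequences = {  # hypothetical participants and their 'S' fluency outputs
    "p01": "sun salt sea sand silver",
    "p02": "stop start stand still street",
    "p03": "snake spider shark seal swan",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
ids = list(sequences)
emb = model.encode([sequences[i] for i in ids])  # one vector per sequence

# Cosine similarity between every pair of sequence embeddings.
norms = np.linalg.norm(emb, axis=1)
sim = (emb @ emb.T) / np.outer(norms, norms)

G = nx.Graph()
G.add_nodes_from(ids)
for a in range(len(ids)):
    for b in range(a + 1, len(ids)):
        if sim[a, b] > 0.2:  # arbitrary threshold for the sketch
            G.add_edge(ids[a], ids[b], weight=float(sim[a, b]))

# Greedy modularity as a stand-in for stochastic block modelling.
if G.number_of_edges():
    for k, community in enumerate(greedy_modularity_communities(G)):
        print(f"community {k}: {sorted(community)}")
```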

3.
Disabil Rehabil Assist Technol ; 18(7): 1011-1025, 2023 10.
Article in English | MEDLINE | ID: mdl-34455895

ABSTRACT

PURPOSE: To examine the effect of a communication intervention package on expressive communication and visual attention in individuals with Rett syndrome. MATERIALS AND METHODS: A modified withdrawal (A-B1-A1-B2-A2) single case experimental design with a direct inter-subject replication across three participants was applied. Three women with Rett syndrome participated. The study took place over a six-week period and comprised 32 sessions with each participant. All sessions were video recorded. During the intervention the communication partner used aided language modelling on a gaze-controlled device in combination with responsive partner strategies. Expressive communication was assessed as synthesised words per minute and unique synthesised words per minute. Visual attention was assessed as the rate of focused gazes (1 s or longer) in interaction. RESULTS: An intervention effect was found on the rate of unique words for all participants. The rate of words increased for two participants when the intervention was introduced, but no withdrawal effect could be seen. An intervention effect on visual attention could be seen for one participant. The intervention appeared to have social validity as reported by caregivers. CONCLUSION: Aided language modelling (ALM), combined with responsive partner strategies and a gaze-controlled device, may be used with adults with Rett syndrome to increase their rate of expressive communication. Detailed observational measures revealed individual learning patterns, which may provide clinically valuable insights. Implications for Rehabilitation: Adults with Rett syndrome may benefit from access to gaze-controlled devices in combination with responsive partner strategies. Responsive partner communication may help some individuals with Rett syndrome increase their rate of synthesised utterances. Rate of focused gazes may be considered as an outcome measure for individuals with oculomotor difficulties when introducing aided language modelling.


Subject(s)
Communication Aids for Disabled, Communication Disorders, Rett Syndrome, Humans, Adult, Female, Communication, Ocular Fixation, Language, Technology
4.
Open Res Eur ; 3: 197, 2023.
Article in English | MEDLINE | ID: mdl-38274893

ABSTRACT

Non-standardised early vernaculars present a problem for search tools due to their high degree of variation. The challenge lies in the variation in orthography, syntax, and lexicon between titles, incipits, and explicits in manuscript copies of the same work. Traditional search methods relying on exact string matching or regular expressions fail to address these variations comprehensively. This project presents a web-based search tool specifically designed to handle linguistic and textual variation. The software is made available as part of the Index of Middle English Prose (IMEP). The search tool addresses the issue of variation by utilizing a database of incipits and explicits, character-based n-gram language models (LMs) built with the Stanford Research Institute Language Modelling (SRILM) toolkit, and a fuzzy search script (IMEP: FSS) written in Python. The tool optimizes for recall, retrieving multiple potential matches for a search string without attempting to identify the 'correct' one. The search process involves looking up exact matches in the database while simultaneously using the fuzzy search script to evaluate the incipits and explicits against a model of the search string, followed by a match of the search string against models of the incipits and explicits. This two-step process shortens processing time, which would otherwise be unreasonably long: matching the search string against an SRILM model of each individual incipit or explicit in the IMEP is precise but time-consuming, whereas first matching all texts against a single LM built from the search string is fast. A web application, built using Django and Docker, combines the results of the direct database lookup and the fuzzy search script, presenting them as a list with exact matches followed by fuzzy matches ordered by increasing model perplexity. The tool is made available open access and can be adapted to other datasets.
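
The fast first pass can be illustrated with a toy model. Below is a minimal pure-Python sketch standing in for the SRILM character n-gram models: a single bigram LM is built from the search string, and every incipit is ranked by its perplexity under that model (lower perplexity means a closer match). The query and incipits are invented examples.

```python
# Minimal sketch of perplexity-ranked fuzzy matching with a character bigram LM.
import math
from collections import Counter

def char_bigram_lm(text):
    """Add-one-smoothed character bigram model built from one string."""
    text = f"^{text.lower()}$"           # ^ and $ mark start and end
    bigrams = Counter(zip(text, text[1:]))
    unigrams = Counter(text[:-1])        # context counts
    vocab = set(text)

    def logprob(seq):
        seq = f"^{seq.lower()}$"
        return sum(
            math.log((bigrams[(a, b)] + 1) / (unigrams[a] + len(vocab)))
            for a, b in zip(seq, seq[1:])
        )

    return logprob

def perplexity(logprob, seq):
    """Per-character perplexity of seq under the model (lower = closer)."""
    return math.exp(-logprob(seq) / (len(seq) + 1))

incipits = [  # hypothetical database entries
    "Here begynneth a tretys of fysshynge wyth an angle",
    "Here bygynnyth a tretis of fishing with an angle",
    "In the begynnyng God made of nought heuene and erthe",
]
query = "here begins a treatise of fishing"

# Step 1: one LM from the query, scored against every incipit (fast pass).
lm = char_bigram_lm(query)
for s in sorted(incipits, key=lambda s: perplexity(lm, s)):
    print(f"{perplexity(lm, s):8.2f}  {s}")
```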

5.
Top Cogn Sci ; 14(1): 143-162, 2022 01.
Article in English | MEDLINE | ID: mdl-34118113

ABSTRACT

Social media are digitalizing massive amounts of users' cognitions in terms of timelines and emotional content. Such Big Data opens unprecedented opportunities for investigating cognitive phenomena like perception, personality, and information diffusion, but requires suitable interpretable frameworks. Since social media data come from users' minds, worthy candidates for this challenge are cognitive networks, models of cognition giving structure to mental conceptual associations. This work outlines how cognitive network science can open new, quantitative ways of understanding cognition through online media, such as: (i) reconstructing how users semantically and emotionally frame events with contextual knowledge unavailable to machine learning; (ii) investigating conceptual salience/prominence through knowledge structure in social discourse; (iii) studying users' personality traits like openness-to-experience, curiosity, and creativity through language in posts; and (iv) bridging cognitive/emotional content and social dynamics via multilayer networks comparing the mindsets of influencers and followers. These advancements combine cognitive, network, and computer science to understand cognitive mechanisms in both digital and real-world settings, but come with limitations concerning representativeness, individual variability, and data integration. These aspects are discussed along with the ethical implications of manipulating sociocognitive data. In the future, reading cognitions through networks and social media can expose cognitive biases amplified by online platforms and relevantly inform policy-making, education, and markets about complex cognitive trends.


Subject(s)
Social Cognition, Social Media, Cognition, Emotions, Humans, Language
6.
Eur Clin Respir J ; 8(1): 2004664, 2021.
Article in English | MEDLINE | ID: mdl-34868489

ABSTRACT

INTRODUCTION: Smoking cessation is an essential part of successful treatment in many chronic diseases. Our aim was to analyse how actively clinicians discuss and document patients' smoking status in electronic health records (EHRs) and deliver smoking cessation assistance. METHODS: We analysed the records using a combination of rule-based and deep learning-based algorithms. Narrative reports of all adult patients whose treatment started between 2010 and 2016 for one of seven common chronic diseases were followed for two years. Smoking-related sentences were first extracted with a rule-based algorithm. Subsequently, a pre-trained ULMFiT-based algorithm classified each patient's smoking status as current smoker, ex-smoker, or never smoker. A rule-based algorithm was then used again to analyse physician-patient discussions on smoking cessation among current smokers. RESULTS: A total of 35,650 patients were studied. Of all patients, 60% had a smoking status documented in the EHR, and documentation improved over time. Smoking status was documented more actively for COPD (86%) and sleep apnoea (83%) patients than for patients with asthma, type 1 and 2 diabetes, cerebral infarction, and ischaemic heart disease (range 44-61%). Of the current smokers (N = 7,105), 49% had discussed smoking cessation with their physician. The performance of the ULMFiT-based classifier was good, with F-scores of 79-92. CONCLUSION: We found that smoking status was documented in 60% of patients with chronic disease and that the clinician had discussed smoking cessation with 49% of patients who were current smokers. The ULMFiT-based classifier showed good to excellent performance and allowed us to efficiently study a large number of patients' medical narratives.
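
As an illustration of the rule-based first stage, here is a minimal sketch with an invented keyword pattern and an invented note; the study's actual rules and the ULMFiT classification stage are not reproduced.

```python
# Minimal sketch: extract smoking-related sentences from a clinical note.
import re

SMOKING_PATTERN = re.compile(
    r"\b(smok\w*|cigarette\w*|tobacco|nicotine|ex-smoker)\b",
    re.IGNORECASE,
)

def smoking_sentences(note):
    """Return the sentences of a clinical note that mention smoking."""
    sentences = re.split(r"(?<=[.!?])\s+", note)
    return [s for s in sentences if SMOKING_PATTERN.search(s)]

note = ("Patient reports shortness of breath. Smokes 10 cigarettes per day. "
        "Advised to stop smoking; referred to cessation counselling.")
for s in smoking_sentences(note):
    print(s)  # these sentences would feed the downstream classifier
```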

7.
J Biomed Inform ; 124: 103938, 2021 12.
Article in English | MEDLINE | ID: mdl-34695581

ABSTRACT

The current mode of use of Electronic Health Records (EHR) elicits text redundancy. Clinicians often populate new documents by duplicating existing notes, then updating accordingly. Data duplication can lead to propagation of errors, inconsistencies, and misreporting of care. Therefore, measures to quantify information redundancy play an essential role in evaluating innovations that operate on clinical narratives. This work is a quantitative examination of information redundancy in EHR notes. We present and evaluate two methods to measure redundancy: an information-theoretic approach and a lexicosyntactic and semantic model. Our first measure trains large Transformer-based language models using clinical text from a large openly available US-based ICU dataset and a large multi-site UK-based hospital. By comparing the information-theoretically efficient encoding of clinical text against open-domain corpora, we find that clinical text is ∼1.5× to ∼3× less efficient than open-domain corpora at conveying information. Our second measure applies the automated summarisation metrics ROUGE and BERTScore to successive note pairs, demonstrating lexicosyntactic and semantic redundancy with average scores from ∼43% to ∼65%.


Subject(s)
Electronic Health Records, Natural Language Processing, Language, Narration, Semantics
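
The second measure can be sketched with the openly available rouge-score and bert-score packages; the note pair below is invented, and this illustrates the metric calls rather than the paper's evaluation pipeline.

```python
# Minimal sketch: score a successive note pair for lexicosyntactic (ROUGE)
# and semantic (BERTScore) redundancy.
from rouge_score import rouge_scorer
from bert_score import score as bert_score

earlier = "Patient stable. Chest X-ray clear. Continue current antibiotics."
later = ("Patient remains stable. Chest X-ray clear. "
         "Continue antibiotics and review tomorrow.")

# Lexicosyntactic redundancy: n-gram overlap between the note pair.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
for name, result in scorer.score(earlier, later).items():
    print(f"{name}: F1 = {result.fmeasure:.2f}")

# Semantic redundancy: contextual-embedding similarity of the pair.
P, R, F1 = bert_score([later], [earlier], lang="en")
print(f"BERTScore: F1 = {float(F1[0]):.2f}")
```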
8.
Data Brief ; 31: 105951, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32671155

ABSTRACT

Language modelling using neural networks requires adequate data to guarantee quality word representations, which are important for natural language processing (NLP) tasks. However, African languages, Swahili in particular, have been disadvantaged, and most of them are classified as low-resource languages because of inadequate data for NLP. In this article, we derive and contribute an unannotated Swahili dataset, a Swahili syllabic alphabet, and a Swahili word analogy dataset to address the need for language processing resources, especially for low-resource languages. We derive the unannotated Swahili dataset by pre-processing raw Swahili data using a Python script, formulate the syllabic alphabet, and develop the Swahili word analogy dataset based on an existing English dataset. We envisage that the datasets will support not only language models but also other downstream NLP tasks such as part-of-speech tagging, machine translation, and sentiment analysis.
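
A minimal sketch of the kind of pre-processing step described (lowercasing, stripping non-letter characters, collapsing whitespace) is shown below; the file names are hypothetical and the authors' actual script is not reproduced.

```python
# Minimal sketch: clean raw text line by line for language modelling.
import re

def preprocess(line):
    """Lowercase, strip punctuation and digits, collapse whitespace."""
    line = line.lower()
    line = re.sub(r"[^a-z\u00C0-\u024F'\s]", " ", line)  # keep letters only
    return re.sub(r"\s+", " ", line).strip()

with open("raw_swahili.txt", encoding="utf-8") as src, \
     open("clean_swahili.txt", "w", encoding="utf-8") as dst:
    for raw in src:
        clean = preprocess(raw)
        if clean:  # skip lines that were only markup or numbers
            dst.write(clean + "\n")
```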

9.
Sensors (Basel) ; 20(8)2020 Apr 19.
Article in English | MEDLINE | ID: mdl-32325814

ABSTRACT

The advent of new devices, technology, machine learning techniques, and the availability of free large speech corpora has resulted in rapid and accurate speech recognition. In the last two decades, extensive research has been carried out by researchers and different organizations to experiment with new techniques and their applications in speech processing systems. There are several speech command based applications in the areas of robotics, IoT, ubiquitous computing, and various human-computer interfaces. Various researchers have worked on enhancing the efficiency of speech command based systems using the speech commands dataset; however, none of them has catered for noise in that dataset. Noise is one of the major challenges in any speech recognition system, as real-world noise is highly variable and unavoidable and degrades the performance of speech recognition systems, particularly those that have not learned noise effectively. We thoroughly analyse the latest trends in speech recognition and evaluate the speech commands dataset with different machine learning based and deep learning based techniques. A novel technique is proposed for noise robustness that augments the training data with noise. Our proposed technique is tested on clean and noisy data along with locally generated data and achieves much better results than existing state-of-the-art techniques, thus setting a new benchmark.


Subject(s)
Noise, Speech Recognition Software, Deep Learning, Humans, Machine Learning, Neural Networks (Computer), Speech Perception/physiology
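
Noise augmentation of the kind proposed can be sketched as mixing a noise clip into each clean training clip at a chosen signal-to-noise ratio. The following is a minimal numpy sketch under that assumption, with random arrays standing in for real waveforms; it is not the paper's exact recipe.

```python
# Minimal sketch: mix noise into a clean waveform at a target SNR (dB).
import numpy as np

def mix_at_snr(clean, noise, snr_db, rng=None):
    """Mix a noise clip into a clean clip at a target signal-to-noise ratio."""
    rng = rng or np.random.default_rng()
    if len(noise) < len(clean):  # tile short noise clips to cover the signal
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    start = rng.integers(0, len(noise) - len(clean) + 1)
    noise = noise[start:start + len(clean)]
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    # scale so that 10*log10(p_clean / (scale**2 * p_noise)) == snr_db
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

# Hypothetical usage: augment a training clip at 10 dB SNR.
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)   # stand-in for a 1 s command clip
noise = rng.standard_normal(48000)   # stand-in for a recorded noise clip
augmented = mix_at_snr(clean, noise, snr_db=10, rng=rng)
```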
10.
PeerJ Comput Sci ; 6: e295, 2020.
Article in English | MEDLINE | ID: mdl-33816946

ABSTRACT

Mindset reconstruction maps how individuals structure and perceive knowledge, a map unfolded here by investigating language and its cognitive reflection in the human mind, i.e., the mental lexicon. Textual forma mentis networks (TFMN) are glass boxes introduced for extracting and understanding the structure of mindsets (in Latin, forma mentis) from textual data. Combining network science, psycholinguistics, and Big Data, TFMNs successfully identified relevant concepts in benchmark texts without supervision. Once validated, TFMNs were applied to the case study of distorted mindsets about the gender gap in science. Focusing on social media, this work analysed 10,000 tweets, mostly representing individuals' opinions at the beginning of posts. "Gender" and "gap" elicited a mostly positive, trustful and joyous perception, with semantic associates that celebrated successful female scientists, related the gender gap to wage differences, and hoped for a future resolution. The perception of "woman" highlighted jargon of sexual harassment and stereotype threat (a form of implicit cognitive bias) about women in science "sacrificing personal skills for success". The semantic frame of "man" highlighted awareness of the myth of male superiority in science. No anger was detected around "person", suggesting that tweets became less tense around genderless terms. No stereotypical perception of "scientist" was identified online, unlike in real-world surveys. This analysis thus indicates that Twitter discourse, mostly conversation-opening posts, promoted a largely stereotype-free, positive and trustful perception of gender disparity, aimed at closing the gap. Hence, future monitoring against discriminating language should focus on other parts of conversations, such as users' replies. TFMNs enable new ways of monitoring collective online mindsets, offering data-informed ground for policy making.
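
To make "reading a semantic frame off a network" concrete, here is a deliberately crude stand-in: TFMNs are built from syntactic dependencies and valence/emotion norms, but even a plain word co-occurrence graph over invented tweets shows how a concept's frame is its network neighbourhood.

```python
# Crude stand-in for a forma mentis network: word co-occurrence per tweet.
import itertools
import re
import networkx as nx

tweets = [  # hypothetical posts
    "the gender gap in science is closing thanks to brilliant women",
    "wage differences keep the gender gap alive",
    "every woman scientist deserves equal recognition",
]

G = nx.Graph()
for tweet in tweets:
    words = re.findall(r"[a-z']+", tweet.lower())
    for a, b in itertools.combinations(set(words), 2):
        G.add_edge(a, b)

# The semantic frame of a concept = its neighbourhood in the graph.
print(sorted(G.neighbors("gender")))
```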

11.
R Soc Open Sci ; 4(11): 170830, 2017 Nov.
Article in English | MEDLINE | ID: mdl-29291074

ABSTRACT

It is generally believed that when a linguistic item acquires a new meaning, its overall frequency of use rises with time following an S-shaped growth curve. Yet this claim has only been supported by a limited number of case studies. In this paper, we provide the first corpus-based large-scale confirmation of the S-curve in language change. Moreover, we uncover another generic pattern: a latency phase preceding the S-growth, during which the frequency remains close to constant. We propose a usage-based model which predicts both phases, the latency and the S-growth. The driving mechanism is a random walk in the space of frequency of use. The underlying deterministic dynamics highlights the role of a control parameter which tunes the system in the vicinity of a saddle-node bifurcation. In the neighbourhood of the critical point, the latency phase corresponds to the diffusion time over the critical region, and the S-growth to the fast convergence that follows. The durations of the two phases are computed as specific first-passage times, leading to distributions that fit well those extracted from our dataset. We argue that our results are not specific to the studied corpus, but apply to semantic change in general.
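
The deterministic skeleton of this mechanism can be illustrated with the textbook saddle-node normal form; identifying the authors' control parameter with µ below is an assumption of this sketch, not the paper's exact equations, and the stochastic random-walk layer is omitted.

```latex
% Textbook saddle-node normal form (illustration, not the authors' model):
% for a small control parameter mu > 0 just past the bifurcation,
\[
  \dot{x} = \mu + x^{2}, \qquad 0 < \mu \ll 1,
\]
% the trajectory lingers in the bottleneck near x = 0 (the latency phase)
% for a time set by the slow passage through the critical region,
\[
  T_{\text{latency}} \sim \int_{-\infty}^{\infty} \frac{dx}{\mu + x^{2}}
  = \frac{\pi}{\sqrt{\mu}},
\]
% and then escapes rapidly, producing the fast S-shaped growth phase.
```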
