Results 1 - 14 of 14

1.
J Am Med Inform Assoc ; 31(8): 1714-1724, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38934289

ABSTRACT

OBJECTIVES: The surge in patient portal messages (PPMs), and the growing need and workload for efficient PPM triage in healthcare settings, has spurred the exploration of AI-driven solutions that streamline healthcare workflows and ensure timely responses to patients' healthcare needs. However, less attention has been paid to isolating and understanding the patient's primary concern in PPMs, a practice with the potential to yield more nuanced insights and to enhance the quality of healthcare delivery and patient-centered care. MATERIALS AND METHODS: We propose a fusion framework that leverages pretrained language models (LMs) with different language advantages via a convolutional neural network for precise identification of patient primary concerns through multi-class classification. We examined 3 traditional machine learning models, 9 BERT-based language models, 6 fusion models, and 2 ensemble models. RESULTS: Our experiments underscore the superior performance of BERT-based models over traditional machine learning models. Our fusion model emerged as the top-performing solution, delivering a notably improved accuracy of 77.67 ± 2.74% and a macro-averaged F1 score of 74.37 ± 3.70%. DISCUSSION: This study highlights the feasibility and effectiveness of multi-class classification for patient primary concern detection, and of the proposed fusion framework for enhancing it. CONCLUSIONS: Multi-class classification enhanced by a fusion of multiple pretrained LMs not only improves the accuracy and efficiency of identifying patient primary concerns in PPMs but also helps manage the rising volume of PPMs in healthcare, ensuring critical patient communications are addressed promptly and accurately.


Subject(s)
Machine Learning; Patient Portals; Humans; Neural Networks, Computer; Natural Language Processing
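
As a sketch of the fusion idea described above (token embeddings from multiple pretrained LMs combined through a convolutional classifier), the following is a minimal, hypothetical layout; the checkpoint names, the concatenate-along-tokens fusion, and the head sizes are assumptions rather than the authors' exact architecture:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class FusionCNN(nn.Module):
    """Fuse token embeddings from several pretrained LMs with a 1-D CNN head.
    Hypothetical layout, not the paper's exact architecture."""
    def __init__(self, lm_names=("bert-base-uncased", "dmis-lab/biobert-v1.1"),
                 n_classes=5, hidden=768):
        super().__init__()
        self.toks = [AutoTokenizer.from_pretrained(n) for n in lm_names]
        self.lms = nn.ModuleList([AutoModel.from_pretrained(n) for n in lm_names])
        self.conv = nn.Conv1d(hidden, 128, kernel_size=3, padding=1)
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, texts):
        # Each LM tokenizes the messages with its own vocabulary; the token
        # embeddings are concatenated along the sequence axis, convolved,
        # and globally max-pooled before classification.
        feats = [lm(**tok(texts, padding=True, truncation=True,
                          return_tensors="pt")).last_hidden_state
                 for tok, lm in zip(self.toks, self.lms)]
        x = torch.cat(feats, dim=1).transpose(1, 2)      # (B, hidden, T1+T2)
        x = torch.relu(self.conv(x)).max(dim=2).values   # global max pool
        return self.classifier(x)

logits = FusionCNN()(["My refill request was ignored twice."])
```
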
2.
Heliyon ; 10(4): e26404, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38404885

ABSTRACT

Incorporating environmental, social, and governance (ESG) criteria is essential for promoting sustainability in business and is considered a set of principles that can increase a firm's value. This research proposes a strategy for rating ESG using text-based automated techniques. For autonomous classification, data were collected from the news archive LexisNexis and classified as E, S, or G based on the ESG materials provided by the Refinitiv-Sustainable Leadership Monitor, which offers over 450 metrics. In addition, Bidirectional Encoder Representations from Transformers (BERT), Robustly optimized BERT approach (RoBERTa), and A Lite BERT (ALBERT) models were trained to categorize preprocessed ESG documents with a voting ensemble model, and their performance was measured. The ensemble model combining BERT and ALBERT reached an accuracy of 80.79% with a batch size of 20. Additionally, this research validated the performance of the framework on companies included in the Dow Jones Industrial Average (DJIA) and compared the results with the grades provided by Morgan Stanley Capital International (MSCI), a globally renowned ESG rating agency known for its high credibility. This study supports the use of sophisticated natural language processing (NLP) techniques to extract important knowledge from large amounts of text-based data and improve the ESG assessment criteria established by different rating agencies.
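
A minimal sketch of the soft-voting ensemble this abstract describes: class probabilities from several encoders are averaged before taking the argmax. The base checkpoints below stand in for the paper's fine-tuned BERT/ALBERT models, and the three-way E/S/G head is an assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Substitute your own fine-tuned ESG classifiers; these are untrained stand-ins.
CKPTS = ["bert-base-uncased", "albert-base-v2"]

def soft_vote(text: str, ckpts=CKPTS) -> str:
    """Average class probabilities from several encoders (soft voting)."""
    probs = []
    for name in ckpts:
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(
            name, num_labels=3).eval()               # E / S / G head (assumed)
        inputs = tok(text, truncation=True, return_tensors="pt")
        with torch.no_grad():
            probs.append(model(**inputs).logits.softmax(-1))
    return "ESG"[torch.stack(probs).mean(0).argmax().item()]

print(soft_vote("The firm cut plant emissions by 30% this year."))
```
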

3.
J Med Internet Res ; 26: e48443, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38271060

ABSTRACT

BACKGROUND: The widespread use of electronic health records in the clinical and biomedical fields makes the removal of protected health information (PHI) essential to maintaining privacy. However, a significant portion of information is recorded in unstructured textual form, posing a challenge for deidentification. In multilingual countries, medical records may be written in a mixture of more than one language, referred to as code mixing. Most current clinical natural language processing techniques are designed for monolingual text, and the deidentification of code-mixed text remains to be addressed. OBJECTIVE: The aim of this study was to investigate the effectiveness and underlying mechanisms of fine-tuned pretrained language models (PLMs) in identifying PHI in a code-mixed context. Additionally, we aimed to evaluate the potential of prompting large language models (LLMs) to recognize PHI in a zero-shot manner. METHODS: We compiled the first clinical code-mixed deidentification data set, consisting of text written in Chinese and English. We explored the effectiveness of fine-tuned PLMs for recognizing PHI in code-mixed content, focusing on whether PLMs exploit naming regularity and mention coverage to achieve superior performance, by probing the developed models' outputs to examine their decision-making process. Furthermore, we investigated the potential of prompt-based in-context learning of LLMs for recognizing PHI in code-mixed text. RESULTS: The developed methods were evaluated on a code-mixed deidentification corpus of 1700 discharge summaries. We observed that different PHI types tended to occur in different types of language-mixed sentences, and that PLMs could effectively recognize PHI by exploiting learned naming regularity. However, the models exhibited suboptimal results when this regularity was weak or when mentions contained unknown words that the representations could not capture well. We also found that the availability of code-mixed training instances is essential for model performance. Furthermore, LLM-based deidentification was a feasible and appealing approach that can be controlled and enhanced through natural language prompts. CONCLUSIONS: This study contributes to understanding the underlying mechanisms of PLMs in code-mixed deidentification and highlights the significance of incorporating code-mixed instances into model training. To support the advancement of research, we made a manipulated subset of the resynthesized data set available for research purposes. Based on the compiled data set, we found that LLM-based deidentification is a feasible approach, but carefully crafted prompts are essential to avoid unwanted output. The use of such methods in hospital settings nevertheless requires careful consideration of data security and privacy concerns. Further research could explore augmenting PLMs and LLMs with external knowledge to improve their ability to recognize rare PHI.


Subject(s)
Artificial Intelligence; Electronic Health Records; Humans; Natural Language Processing; Privacy; China
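
The zero-shot LLM prompting the study evaluates could look roughly like the sketch below. The OpenAI client and model name are assumptions for illustration (the paper does not tie the approach to a particular vendor), and the PHI type list is abbreviated:

```python
from openai import OpenAI  # assumes the openai>=1.0 Python client; any chat LLM works

client = OpenAI()

PROMPT = """You are a clinical de-identification assistant.
The note below mixes Chinese and English. List every protected health
information (PHI) mention as `type: text`, one per line.
PHI types: NAME, DATE, LOCATION, HOSPITAL, ID, AGE, CONTACT.

Note:
{note}
"""

def zero_shot_phi(note: str) -> str:
    # Model name is an assumption; the study does not disclose which LLM it used.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": PROMPT.format(note=note)}],
        temperature=0,
    )
    return resp.choices[0].message.content
```
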
4.
J Healthc Inform Res ; 7(4): 433-446, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37927378

ABSTRACT

Pretrained language models augmented with in-domain corpora show impressive results in biomedical and clinical Natural Language Processing (NLP) tasks in English. However, there has been minimal work in low-resource languages. Although some pioneering work has shown promising results, many scenarios remain to be explored before effective pretrained biomedical language models can be engineered for low-resource settings. This study introduces BioBERTurk, a family of four pretrained Turkish biomedical language models. To evaluate the models, we also introduce a labeled dataset for classifying radiology reports of head CT examinations. Two parts of the reports, impressions and findings, are evaluated separately to observe model performance on longer and less informative text. We compared the models with Turkish BERT (BERTurk) pretrained on general-domain text, multilingual BERT (mBERT), and LSTM+attention-based baseline models. The first model, initialized from BERTurk and then further pretrained on a biomedical corpus, performs statistically significantly better than BERTurk, mBERT, and the baselines on both datasets. The second model continues pretraining BERTurk on radiology Ph.D. theses alone, to test the effect of task-related text; it slightly outperformed all models on the impressions dataset, showing that continued pretraining on radiology-related data alone can be effective. The third model continues pretraining on the radiology theses added to the biomedical corpus but shows no statistically meaningful difference on either dataset. The final model combines the radiology and biomedical corpora with the corpus of BERTurk and pretrains a BERT model from scratch; it is the worst-performing model of the BioBERTurk family, worse even than BERTurk and mBERT.
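
Continued (domain-adaptive) pretraining of BERTurk, as performed for the first BioBERTurk variant, follows the standard masked-language-modeling recipe. A minimal sketch with the Hugging Face Trainer; the corpus file and hyperparameters are placeholders, not the paper's settings:

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# dbmdz/bert-base-turkish-cased is the public BERTurk checkpoint.
tok = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased")
model = AutoModelForMaskedLM.from_pretrained("dbmdz/bert-base-turkish-cased")

# Placeholder corpus path; supply your own in-domain Turkish biomedical text.
ds = load_dataset("text", data_files={"train": "turkish_biomed_corpus.txt"})
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments("bioberturk-cpt", per_device_train_batch_size=16,
                           num_train_epochs=1),
    train_dataset=ds["train"],
    # Random 15% masking, the standard MLM objective.
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
trainer.train()
```
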

5.
MAbs ; 15(1): 2285904, 2023.
Article in English | MEDLINE | ID: mdl-38010801

ABSTRACT

Prior research has generated vast amounts of antibody sequence data, which has allowed the pre-training of language models on amino acid sequences to improve the efficiency of antibody screening and optimization. However, compared with those for general proteins, fewer pre-trained language models are available for antibody sequences. Additionally, existing pre-trained models rely solely on embedding representations of amino acids or k-mers, which do not explicitly take into account the role of secondary structure features. Here, we present a new pre-trained model called BERT2DAb. This model incorporates secondary structure information based on self-attention to learn representations of antibody sequences. Our model achieves state-of-the-art performance on three downstream tasks, including two antigen-antibody binding classification tasks (precision: 85.15%/94.86%; recall: 87.41%/86.15%) and one antigen-antibody complex mutation binding free energy prediction task (Pearson correlation coefficient: 0.77). Moreover, we propose a novel method to analyze the relationship between attention weights and the contact states of pairs of subsequences in tertiary structures, which enhances the interpretability of BERT2DAb. Overall, our model demonstrates strong potential for improving antibody screening and design through downstream applications.


Subject(s)
Amino Acids; Proteins; Amino Acid Sequence; Proteins/chemistry; Amino Acids/chemistry; Protein Structure, Secondary; Antibodies
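
The attention-versus-contact analysis can be sketched as follows: extract per-head attention maps from a protein encoder and compare high-attention residue pairs against a contact map from the tertiary structure. The ProtBert checkpoint and the thresholding are assumptions; BERT2DAb's own weights and method details are in the paper:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Rostlab/prot_bert")
model = AutoModel.from_pretrained("Rostlab/prot_bert", output_attentions=True).eval()

seq = "E V Q L V E S G G G L V Q"        # ProtBert expects space-separated residues
inputs = tok(seq, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# Average over layers, batch, and heads to get one (T, T) attention map.
avg = torch.stack(out.attentions).mean(dim=(0, 1, 2))
contacts = torch.zeros_like(avg)  # placeholder; fill from the antibody's 3-D structure
high_attn = avg > avg.quantile(0.95)
overlap = (high_attn & (contacts > 0)).sum() / high_attn.sum().clamp(min=1)
```
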
6.
Methods ; 219: 8-15, 2023 11.
Article in English | MEDLINE | ID: mdl-37690736

ABSTRACT

Protein-ligand interaction (PLI) prediction is a critical step in drug discovery. Recently, protein pretrained language models (PLMs) have shown exceptional performance across a wide range of protein-related tasks. However, significant heterogeneity exists between PLM pretraining and PLI tasks, introducing uncertainty about transferability. In this study, we propose a method that quantitatively assesses the significance of protein PLMs in PLI prediction. Specifically, we analyze the performance of three widely used protein PLMs (TAPE, ESM-1b, and ProtTrans) on three PLI tasks (PDBbind, Kinase, and DUD-E). The pretrained models consistently achieve improved performance and reduced time cost, demonstrating that pretraining enhances both the accuracy and efficiency of PLI prediction. By quantitatively assessing transferability, the optimal PLM for each PLI task can be identified without costly transfer experiments. Additionally, we examine the contributions of PLMs to the distribution of the feature space, highlighting the improved discriminability after pretraining. Our findings provide insights into the mechanisms underlying PLMs in PLI prediction and pave the way for the design of more interpretable and accurate PLMs in the future. Code and data are freely available at https://github.com/brian-zZZ/PLM-PLI.


Subject(s)
Language; Proteins; Ligands
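
A frozen-embedding probe in the spirit of this transferability assessment: mean-pooled PLM embeddings feed a light classifier, so candidate PLMs can be compared without full fine-tuning. The ESM-2 checkpoint, pooling choice, and toy data are assumptions (the paper evaluates TAPE, ESM-1b, and ProtTrans on PDBbind, Kinase, and DUD-E):

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
plm = AutoModel.from_pretrained("facebook/esm2_t6_8M_UR50D").eval()

def embed(seq: str) -> torch.Tensor:
    """Mean-pool the frozen PLM's residue embeddings into one protein vector."""
    with torch.no_grad():
        h = plm(**tok(seq, return_tensors="pt")).last_hidden_state
    return h.mean(dim=1).squeeze(0)

# Toy sequences and binding labels; substitute a real PLI dataset.
seqs = ["MKTAYIAKQR", "GAVLILKKRE", "MSDNELQQAS", "PPGFSPFRKK"]
labels = [1, 0, 1, 0]
X = torch.stack([embed(s) for s in seqs]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)  # transferability proxy
```
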
7.
J Med Internet Res ; 25: e48115, 2023 09 20.
Article in English | MEDLINE | ID: mdl-37632414

ABSTRACT

BACKGROUND: Biomedical relation extraction (RE) is of great importance for researchers to conduct systematic biomedical studies. It not only helps knowledge mining, such as knowledge graphs and novel knowledge discovery, but also promotes translational applications, such as clinical diagnosis, decision-making, and precision medicine. However, the relations between biomedical entities are complex and diverse, and comprehensive biomedical RE is not yet well established. OBJECTIVE: We aimed to investigate and improve large-scale RE with diverse relation types and conduct usability studies with application scenarios to optimize biomedical text mining. METHODS: Data sets containing 125 relation types with different entity semantic levels were constructed to evaluate the impact of entity semantic information on RE, and performance analysis was conducted on different model architectures and domain models. This study also proposed a continued pretraining strategy and integrated models with scripts into a tool. Furthermore, this study applied RE to the COVID-19 corpus with article topics and application scenarios of clinical interest to assess and demonstrate its biological interpretability and usability. RESULTS: The performance analysis revealed that RE achieves the best performance when the detailed semantic type is provided. For a single model, PubMedBERT with continued pretraining performed the best, with an F1-score of 0.8998. Usability studies on COVID-19 demonstrated the interpretability and usability of RE, and a relation graph database was constructed, which was used to reveal existing and novel drug paths with edge explanations. The models (including pretrained and fine-tuned models), integrated tool (Docker), and generated data (including the COVID-19 relation graph database and drug paths) have been made publicly available to the biomedical text mining community and clinical researchers. CONCLUSIONS: This study provided a comprehensive analysis of RE with diverse relation types. Optimized RE models and tools for diverse relation types were developed, which can be widely used in biomedical text mining. Our usability studies provided a proof-of-concept demonstration of how large-scale RE can be leveraged to facilitate novel research.


Subject(s)
COVID-19; Humans; Data Mining; Databases, Factual; Knowledge; Precision Medicine
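
One common way to supply the detailed entity semantic types that this study found most helpful is to wrap mentions in typed markers before fine-tuning a PubMedBERT classifier over the 125 relation types. The marker scheme below is an assumption, not necessarily the authors' exact input format:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=125)

# Typed entity markers inject each mention's semantic type into the input text.
tok.add_tokens(["<e1:Chemical>", "</e1:Chemical>", "<e2:Gene>", "</e2:Gene>"])
model.resize_token_embeddings(len(tok))

text = ("<e1:Chemical> remdesivir </e1:Chemical> inhibits the "
        "<e2:Gene> RdRp </e2:Gene> complex in SARS-CoV-2.")
logits = model(**tok(text, return_tensors="pt")).logits  # one score per relation type
```
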
8.
J Biomed Inform ; 145: 104459, 2023 09.
Article in English | MEDLINE | ID: mdl-37531999

ABSTRACT

Document-level relation extraction is designed to recognize relations between entities both within and across sentences. Current mainstream document-level relation extraction models are based mainly on graph methods, or on graphs combined with pre-trained language models, which makes the overall workflow relatively complex. In this work, we propose biomedical relation extraction based on prompt learning to avoid complex extraction pipelines while obtaining decent performance. In particular, we present a model that combines prompt learning with T5 for document-level relation extraction, integrating a mask-template mechanism into the model. In addition, we propose a few-shot relation extraction method that combines the K-nearest neighbor (KNN) algorithm with prompt learning: similar semantic labels are selected via KNN, and relation extraction is subsequently conducted. Results on two biomedical document-level benchmarks indicate that our model improves the learning of document semantic information, achieving an improvement in relation F1 score of 3.1% on CDR.


Subject(s)
Algorithms; Semantics; Language; Learning; Natural Language Processing
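
The mask-template mechanism with T5 can be sketched as below: the document is wrapped in a template whose sentinel token (<extra_id_0>) T5 fills with a relation label after fine-tuning. The template wording and the example are illustrative assumptions:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

doc = "Fluoxetine was associated with hyponatremia in elderly patients."
template = f"{doc} The relation between fluoxetine and hyponatremia is <extra_id_0>."

ids = tok(template, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=5)
# After fine-tuning, the decoded span maps onto a relation label (e.g., "induces").
print(tok.decode(out[0], skip_special_tokens=True))
```
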
9.
Brief Bioinform ; 24(4)2023 07 20.
Article in English | MEDLINE | ID: mdl-37204193

ABSTRACT

Determining intrinsically disordered regions of proteins is essential for elucidating protein biological functions and the mechanisms of their associated diseases. As the gap between the number of experimentally determined protein structures and the number of protein sequences continues to grow exponentially, there is a need for an accurate and computationally efficient disorder predictor. However, current single-sequence-based methods are of low accuracy, while evolutionary profile-based methods are computationally intensive. Here, we propose LMDisorder, a fast and accurate protein disorder predictor that employs embeddings generated by unsupervised pretrained language models as features. LMDisorder performs best among all single-sequence-based methods and is comparable to or better than another language-model-based technique on four independent test sets. Furthermore, LMDisorder showed performance equivalent to or even better than the state-of-the-art profile-based technique SPOT-Disorder2. In addition, LMDisorder's high computational efficiency enabled an analysis of the entire human proteome, showing that proteins with high predicted disorder content are associated with specific biological functions. The datasets, source codes, and trained model are available at https://github.com/biomed-AI/LMDisorder.


Subject(s)
Proteome; Software; Humans; Amino Acid Sequence; Biological Evolution
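
At its core, a single-sequence disorder predictor of this kind tags each residue using pretrained-LM embeddings. A minimal untrained sketch; the small ESM-2 checkpoint and linear head are stand-ins, and the real model is linked above:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
plm = AutoModel.from_pretrained("facebook/esm2_t6_8M_UR50D").eval()
head = nn.Linear(plm.config.hidden_size, 2)  # ordered vs disordered, per residue

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
with torch.no_grad():
    h = plm(**tok(seq, return_tensors="pt")).last_hidden_state
disorder_logits = head(h)[0, 1:-1]  # drop BOS/EOS; one logit pair per residue
```
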
10.
Brief Bioinform ; 24(3)2023 05 19.
Article in English | MEDLINE | ID: mdl-36964722

ABSTRACT

Protein function prediction is an essential task in bioinformatics that benefits disease mechanism elucidation and drug target discovery. Owing to the explosive growth of proteins in sequence databases and the diversity of their functions, it remains challenging to predict protein functions quickly and accurately from sequences alone. Although many methods have integrated protein structures, biological networks, or literature information to improve performance, these extra features are often unavailable for most proteins. Here, we propose SPROF-GO, a Sequence-based alignment-free PROtein Function predictor that leverages a pretrained language model to efficiently extract informative sequence embeddings and employs self-attention pooling to focus on important residues. The prediction is further advanced by exploiting homology information and accounting for the overlapping communities of proteins with related functions through a label diffusion algorithm. SPROF-GO was shown to surpass state-of-the-art sequence-based and even network-based approaches by more than 14.5%, 27.3%, and 10.1% in area under the precision-recall curve on the three sub-ontology test sets, respectively. Our method was also demonstrated to generalize well to non-homologous proteins and unseen species. Finally, visualization based on the attention mechanism indicated that SPROF-GO is able to capture sequence domains useful for function prediction. The datasets, source codes, and trained models of SPROF-GO are available at https://github.com/biomed-AI/SPROF-GO. The SPROF-GO web server is freely available at http://bio-web1.nscc-gz.cn/app/sprof-go.


Subject(s)
Proteins; Software; Proteins/metabolism; Algorithms; Computational Biology/methods; Gene Ontology
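
The label diffusion step can be written in a few lines: initial per-protein GO-term scores are repeatedly mixed with their neighbors' scores over a homology graph. The damping factor and toy graph below are illustrative, not SPROF-GO's actual settings:

```python
import numpy as np

def label_diffusion(A, Y, alpha=0.8, iters=20):
    """Iterate Y' = alpha * S @ Y' + (1 - alpha) * Y with row-normalized S."""
    S = A / A.sum(axis=1, keepdims=True).clip(min=1e-9)
    Yd = Y.copy()
    for _ in range(iters):
        Yd = alpha * S @ Yd + (1 - alpha) * Y
    return Yd

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)  # homology adjacency (toy)
Y = np.array([[1, 0], [0, 0], [0, 1]], float)           # initial GO-term scores
print(label_diffusion(A, Y))  # protein 2 inherits scores from its homologs
```
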
11.
Am J Clin Nutr ; 117(3): 553-563, 2023 03.
Article in English | MEDLINE | ID: mdl-36872019

ABSTRACT

BACKGROUND: Food categorization and nutrient profiling are labor-intensive, time-consuming, and costly tasks, given the number of products and labels in large food composition databases and the dynamic food supply. OBJECTIVES: This study used a pretrained language model and supervised machine learning to automate food category classification and nutrition quality score prediction based on manually coded and validated data, and compared the results with models using bag-of-words and structured nutrition facts as prediction inputs. METHODS: Food product information from the University of Toronto Food Label Information and Price Database 2017 (n = 17,448) and 2020 (n = 74,445) was used. Health Canada's Table of Reference Amounts (TRA) (24 categories and 172 subcategories) was used for food categorization, and the Food Standards Australia New Zealand (FSANZ) nutrient profiling system was used for nutrition quality score evaluation. TRA categories and FSANZ scores were manually coded and validated by trained nutrition researchers. A modified pretrained sentence-BERT (Bidirectional Encoder Representations from Transformers) model was used to encode unstructured text from food labels into lower-dimensional vector representations, followed by supervised machine learning algorithms (i.e., elastic net, k-nearest neighbors, and XGBoost) for multiclass classification and regression tasks. RESULTS: Pretrained language model representations used by the XGBoost multiclass classification algorithm reached overall accuracies of 0.98 and 0.96 in predicting TRA major categories and subcategories, outperforming bag-of-words methods. For FSANZ score prediction, our proposed method reached similar accuracy (R2: 0.87; MSE: 14.4) to bag-of-words methods (R2: 0.72-0.84; MSE: 30.3-17.6), whereas the machine learning model using structured nutrition facts performed best (R2: 0.98; MSE: 2.5). The pretrained language model also generalized better to the external test datasets than the bag-of-words methods. CONCLUSIONS: Our automated approach achieved high accuracy in classifying food categories and predicting nutrition quality scores using the text found on food labels. This approach is effective and generalizable in a dynamic food environment, where large amounts of food label data can be obtained from websites.


Subject(s)
Food; Natural Language Processing; Humans; Nutritive Value; Machine Learning; Nutritional Status
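
The pipeline reduces to: encode each label's free text with a sentence encoder, then fit a gradient-boosted classifier on the vectors. A toy sketch; the MiniLM checkpoint and example rows are assumptions, not the study's exact sentence-BERT variant or data:

```python
from sentence_transformers import SentenceTransformer
from xgboost import XGBClassifier

encoder = SentenceTransformer("all-MiniLM-L6-v2")

labels = ["whole wheat bread, sliced", "cola soft drink, 355 ml"]
y = [0, 1]                   # TRA category codes (toy values)
X = encoder.encode(labels)   # one dense vector per food-label text

clf = XGBClassifier(n_estimators=200).fit(X, y)
print(clf.predict(encoder.encode(["multigrain sandwich bread"])))
```
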
12.
Brief Bioinform ; 23(6)2022 11 19.
Article in English | MEDLINE | ID: mdl-36274238

ABSTRACT

More than one-third of the proteins in the Protein Data Bank contain metal ions. Correct identification of metal ion-binding residues is important for understanding protein functions and designing novel drugs. Owing to the small size and high versatility of metal ions, it remains challenging to computationally predict their binding sites from protein sequence. Existing sequence-based methods are of low accuracy due to the lack of structural information, and are time-consuming owing to their use of multi-sequence alignment. Here, we propose LMetalSite, an alignment-free sequence-based predictor for the binding sites of the four metal ions most frequently seen in BioLiP (Zn2+, Ca2+, Mg2+, and Mn2+). LMetalSite leverages a pretrained language model to rapidly generate informative sequence representations and employs a transformer to capture long-range dependencies. Multi-task learning is adopted to compensate for the scarcity of training data and to capture the intrinsic similarities between different metal ions. LMetalSite was shown to surpass state-of-the-art structure-based methods by more than 19.7%, 14.4%, 36.8%, and 12.6% in area under the precision-recall curve on the four independent tests, respectively. Further analyses indicated that the self-attention modules are effective in learning the structural contexts of residues from protein sequence. We provide the data sets, source codes, and trained models of LMetalSite at https://github.com/biomed-AI/LMetalSite.


Subject(s)
Language; Proteins; Protein Conformation; Protein Binding; Binding Sites; Proteins/chemistry; Metals/chemistry; Metals/metabolism; Ions/chemistry
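
The multi-task design can be sketched as a shared transformer trunk over per-residue PLM embeddings with one binding-site head per ion, so the scarce per-ion data share most parameters. The sizes and depth below are illustrative, not LMetalSite's actual configuration:

```python
import torch
import torch.nn as nn

class MultiIonHead(nn.Module):
    """Shared transformer trunk with one per-residue binding head per metal ion."""
    def __init__(self, d_model=320, ions=("Zn", "Ca", "Mg", "Mn")):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)  # shared across ions
        self.heads = nn.ModuleDict({i: nn.Linear(d_model, 1) for i in ions})

    def forward(self, plm_embeddings):  # (B, T, d_model) from a frozen PLM
        h = self.trunk(plm_embeddings)
        return {ion: head(h).squeeze(-1) for ion, head in self.heads.items()}

scores = MultiIonHead()(torch.randn(1, 50, 320))  # per-residue logits per ion
```
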
13.
J Biomed Inform ; 135: 104220, 2022 11.
Article in English | MEDLINE | ID: mdl-36229001

ABSTRACT

BACKGROUND: The increasing number of chest X-ray (CXR) examinations in radiodiagnosis departments burdens radiologists and makes the timely generation of accurate radiological reports highly challenging. An automatic radiological report generation (ARRG) system is envisaged to generate radiographic reports with minimal human intervention, ease radiologists' burden, and smooth the clinical workflow. The success of an ARRG system depends on two critical factors: i) the quality of the features the system extracts from CXR images, and ii) the quality of the linguistic expression it generates to describe the normalities and abnormalities indicated by those features. Most existing ARRG systems fail on the latter factor and do not generate clinically acceptable reports because they ignore the contextual importance of medical terms. OBJECTIVES: The advent of contextual word embeddings, such as ELMo and BERT, has revolutionized several natural language processing (NLP) tasks. A contextual embedding represents a word based on its context. The main objective of this work is to develop an ARRG system that uses contextual word embeddings to generate clinically accurate reports from CXR images. METHODS: We present an end-to-end deep neural network that uses contextual word representations to generate clinically useful radiological reports from CXR images. The proposed network, termed RadioBERT, uses DistilBERT for contextual word representation and leverages transfer learning. Additionally, because abnormal observations matter more than normal ones, the network reorders the generated sentences by applying sentiment analysis, keeping abnormal descriptions at the top of the generated report. RESULTS: An empirical study consisting of several experiments on the OpenI dataset indicates that CNN+Hierarchical LSTM with DistilBERT embeddings improves the benchmark performance. We achieved the following performance scores: BLEU-1 = 0.772, BLEU-2 = 0.770, BLEU-3 = 0.768, BLEU-4 = 0.767, CIDEr = 0.5563, and ROUGE = 0.897. CONCLUSION: The proposed method improves the state-of-the-art performance scores by a substantial margin. We conclude that word embeddings generated by DistilBERT enhance the performance of the hierarchical LSTM in producing clinical reports by a significant margin.


Subject(s)
Deep Learning; Humans; X-Rays; Natural Language Processing; Neural Networks, Computer; Linguistics
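
The sentence-reordering step is straightforward to sketch: score each generated sentence and sort so that abnormal findings come first. The off-the-shelf sentiment pipeline below is a stand-in for whatever classifier RadioBERT actually uses:

```python
from transformers import pipeline

# Default sentiment model (SST-2 DistilBERT); a clinical classifier would be
# a better fit in practice, but this illustrates the mechanism.
clf = pipeline("sentiment-analysis")

report = ["The heart size is normal.",
          "There is a right lower lobe opacity suggesting pneumonia.",
          "No pleural effusion is seen."]

def abnormality(sentence: str) -> float:
    """Higher score = more 'abnormal' (negative-sentiment) finding."""
    out = clf(sentence)[0]
    return out["score"] if out["label"] == "NEGATIVE" else -out["score"]

reordered = sorted(report, key=abnormality, reverse=True)
```
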
14.
JMIR Med Inform ; 8(7): e17958, 2020 Jul 29.
Article in English | MEDLINE | ID: mdl-32723719

ABSTRACT

BACKGROUND: Depression is a serious personal and public mental health problem. Self-reporting is the main method used to diagnose depression and to determine its severity. However, it is not easy to discover patients with depression, owing to feelings of shame in disclosing or discussing their mental health conditions with others. Moreover, self-reporting is time-consuming and often misses cases. Therefore, automatic discovery of patients with depression from other sources, such as social media, has been attracting increasing attention. Social media, as one of the most important daily communication systems, connects large numbers of people, including individuals with depression, and provides a channel to discover them. In this study, we investigated deep-learning methods for depression risk prediction using data from Chinese microblogs, which have the potential to discover more patients with depression and to trace their mental health conditions. OBJECTIVE: The aim of this study was to explore the potential of state-of-the-art deep-learning methods for depression risk prediction from Chinese microblogs. METHODS: Deep-learning methods with pretrained language representation models, including bidirectional encoder representations from transformers (BERT), the robustly optimized BERT pretraining approach (RoBERTa), and generalized autoregressive pretraining for language understanding (XLNet), were investigated for depression risk prediction and compared with previous methods on a manually annotated benchmark dataset. Depression risk was assessed at four levels from 0 to 3, where 0, 1, 2, and 3 denote no inclination and mild, moderate, and severe depression risk, respectively. The dataset was collected from the Chinese microblog Weibo. We also compared the deep-learning methods in two settings: (1) publicly released pretrained language representation models, and (2) the same models further pretrained on a large-scale unlabeled dataset collected from Weibo. Precision, recall, and F1 scores were used as performance evaluation measures. RESULTS: Among the three deep-learning methods, BERT achieved the best performance, with a microaveraged F1 score of 0.856. RoBERTa achieved the best macroaveraged F1 score, 0.424, on depression risk at levels 1, 2, and 3, which represents a new benchmark result on the dataset. The further-pretrained language representation models demonstrated improvement over the publicly released ones. CONCLUSIONS: We applied deep-learning methods with pretrained language representation models to automatically predict depression risk using data from Chinese microblogs. The experimental results showed that these methods perform better than previous methods and have greater potential to discover patients with depression and to trace their mental health conditions.
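
The paper's two headline metrics differ only in averaging: a micro-averaged F1 over all four risk levels, and a macro-averaged F1 restricted to levels 1-3. A short sketch with scikit-learn and toy predictions (the labels and values are illustrative, not the paper's data):

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 1, 2, 3, 3]  # toy gold risk levels
y_pred = [0, 0, 1, 1, 2, 3, 2]  # toy model predictions

micro = f1_score(y_true, y_pred, average="micro")                  # all four levels
macro_123 = f1_score(y_true, y_pred, labels=[1, 2, 3], average="macro")
print(f"micro F1 = {micro:.3f}, macro F1 (levels 1-3) = {macro_123:.3f}")
```
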
