Results 1 - 20 of 110
1.
Heliyon; 10(15): e34893, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39157336

ABSTRACT

This study explores the extent to which Grammarly can serve as a reliable assessment tool for academic English writing. Ten articles published in high-ranking (Q1) scholarly journals and written by native-English-speaking specialists were used to evaluate the accuracy of Grammarly's flagged issues. The results showed that Grammarly tends to over-flag issues, producing many false positives, and that it does not take optional usage in English into account. The study concluded that although Grammarly can identify many ambiguous instances of language use that writers would do well to review and consider for revision, it does not appear to be a reliable tool for assessing academic written English.
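For readers who want to run a similar audit, the underlying arithmetic is a simple precision check over manually adjudicated flags. A minimal sketch in Python, with hypothetical counts in place of the study's data:

```python
# Minimal sketch of a flagging-accuracy audit (hypothetical counts).
# Each Grammarly flag is manually adjudicated as a genuine error
# (true positive) or acceptable/optional usage (false positive).
flags = {"true_positive": 38, "false_positive": 62}  # hypothetical adjudication

total = sum(flags.values())
print(f"Flags adjudicated: {total}")
print(f"Precision of flags: {flags['true_positive'] / total:.1%}")
print(f"False positives among flags: {flags['false_positive'] / total:.1%}")
```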

2.
J Psycholinguist Res; 53(5): 62, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39138811

ABSTRACT

This study investigates persuasive strategies used in the writing of Iranian university students in the field of teaching English as a foreign language (TEFL). The study utilized the seven principles of persuasion presented by Cialdini (The Psychology of Persuasion, Quill William Morrow, New York, 1984; Pre-Suasion: A Revolutionary Way to Influence and Persuade, Simon & Schuster, New York, 2016): 'reciprocity', 'commitment and consistency', 'social proof', 'liking', 'authority', 'scarcity', and 'unity'. The results indicate that 'liking', 'unity', and 'authority' were used more frequently than the other strategies, while 'scarcity' was the least used strategy among participants. Significant gender differences were also observed in the data. These findings have important pedagogical implications and suggest the need to incorporate persuasive strategies into instructional materials and teaching practices to enhance the persuasive writing skills of university students; the gender differences, in turn, highlight the importance of considering individual differences when teaching persuasive writing. Finally, the study discusses the pedagogical implications of these findings in the context of learning and teaching.


Subject(s)
Persuasive Communication; Students; Writing; Humans; Students/psychology; Iran; Male; Universities; Female; Young Adult; Adult
3.
Psychiatry Res; 341: 116145, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39213714

ABSTRACT

This study aimed to assess the ability of an artificial intelligence (AI)-based chatbot to generate abstracts from academic psychiatric articles. We provided 30 full-text psychiatric papers to ChatPDF (based on ChatGPT) and prompted it to generate a structured or unstructured abstract in a similar style. We further used 10 papers from Psychiatry Research as active comparators (unstructured format). We compared the quality of the ChatPDF-generated abstracts with the original human-written abstracts and examined the similarity, plagiarism, detected AI content, and correctness of the AI-generated abstracts. Five experts evaluated the quality of the abstracts using a blinded approach. They also attempted to identify the abstracts written by the original authors and validated the conclusions produced by ChatPDF. We found that similarity and plagiarism were relatively low (only 14.07% and 8.34%, respectively). The detected AI content was 31.48% for generated structured abstracts, 75.58% for generated unstructured abstracts, and 66.48% for active-comparator abstracts. In quality ratings, generated structured abstracts were rated similarly to the originals, but unstructured ones received significantly lower scores. Experts correctly identified the original abstracts 40% of the time with structured abstracts, 73% with unstructured ones, and 77% with active comparators. However, 30% of the conclusions in AI-generated abstracts were incorrect. In conclusion, the data-organization capabilities of AI language models hold significant potential for summarizing information in clinical psychiatry. However, using ChatPDF to summarize psychiatric papers requires caution concerning accuracy.


Subject(s)
Abstracting and Indexing; Artificial Intelligence; Psychiatry; Humans; Abstracting and Indexing/standards; Biomedical Research/standards; Plagiarism
4.
Ann Biomed Eng; 52(9): 2319-2324, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38977530

ABSTRACT

AI shaming refers to the practice of criticizing or looking down on individuals or organizations for using AI to generate content or perform tasks, and it has recently emerged as a phenomenon in academia. This paper examines the characteristics, causes, and effects of AI shaming on academic writers and researchers. AI shaming often involves dismissing the validity or authenticity of AI-assisted work, suggesting that using AI is deceitful, lazy, or less valuable than human-only effort. The paper identifies various profiles of individuals who engage in AI shaming, including traditionalists, technophobes, and elitists, and explores their motivations. The effects of AI shaming are multifaceted, ranging from inhibited technology adoption and stifled innovation to increased stress among researchers and missed opportunities for efficiency. These consequences may hinder academic progress and limit the potential benefits of AI in research and scholarship. Despite these challenges, the paper argues that academic writers and researchers should not be ashamed of using AI responsibly and ethically. By embracing AI as a tool to augment human capabilities, and by being transparent about its use, academic writers and researchers can lead the way in demonstrating responsible AI integration.


Subject(s)
Artificial Intelligence; Researchers; Humans; Social Stigma
5.
Cas Lek Cesk; 162(7-8): 294-297, 2024.
Article in English | MEDLINE | ID: mdl-38981715

ABSTRACT

The advent of large language models (LLMs) based on neural networks marks a significant shift in academic writing, particularly in the medical sciences. These models, including OpenAI's GPT-4, Google's Bard, and Anthropic's Claude, enable more efficient text processing through transformer architectures and attention mechanisms, and can generate coherent texts that are indistinguishable from human-written content. In medicine, they can contribute to the automation of literature reviews, data extraction, and hypothesis formulation. However, ethical concerns arise regarding the quality and integrity of scientific publications and the risk of generating misleading content. This article provides an overview of how LLMs are changing medical writing, the ethical dilemmas they raise, and the possibilities for detecting AI-generated text. It concludes with a look at the potential future of LLMs in academic publishing and their impact on the medical community.


Subject(s)
Neural Networks, Computer; Humans; Natural Language Processing; Language; Publishing/ethics
6.
BMC Med Educ; 24(1): 736, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38982429

ABSTRACT

BACKGROUND: Academic paper writing holds significant importance in the education of medical students and poses a clear challenge for those whose first language is not English. This study aims to investigate the effectiveness of employing large language models, particularly ChatGPT, in improving the English academic writing skills of these students. METHODS: A cohort of 25 third-year medical students from China was recruited. The study consisted of two stages: the students first wrote a mini paper, and then revised it using ChatGPT within two weeks. The evaluation of the mini papers focused on three key dimensions: structure, logic, and language. The evaluation method incorporated both manual scoring and AI scoring using the ChatGPT-3.5 and ChatGPT-4 models. Additionally, we employed a questionnaire to gather feedback on the students' experience of using ChatGPT. RESULTS: After ChatGPT was used for writing assistance, manual scores increased by a notable 4.23 points. Similarly, AI scores based on the ChatGPT-3.5 model increased by 4.82 points, while those based on the ChatGPT-4 model increased by 3.84 points. These results highlight the potential of large language models to support academic writing. Statistical analysis revealed no significant difference between manual scoring and ChatGPT-4 scoring, indicating the potential of ChatGPT-4 to assist teachers in the grading process. Feedback from the questionnaire indicated a generally positive response: 92% of students acknowledged an improvement in the quality of their writing, 84% noted advancements in their language skills, and 76% recognized ChatGPT's contribution to supporting academic research. CONCLUSION: The study highlights the efficacy of large language models like ChatGPT in augmenting the English academic writing proficiency of non-native speakers in medical education, and illustrates the potential of these models to contribute to educational evaluation, particularly in environments where English is not the primary language.
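The before/after comparison reported here is a paired design; a minimal sketch of such an analysis using scipy, with made-up scores rather than the study's data:

```python
# Paired comparison of essay scores before and after ChatGPT-assisted
# revision (made-up data; the study's scoring rubric is not reproduced here).
from scipy import stats

pre = [62, 70, 58, 75, 66, 71, 60, 68]   # hypothetical scores, first draft
post = [67, 74, 63, 78, 70, 76, 65, 71]  # hypothetical scores, revised draft

t, p = stats.ttest_rel(post, pre)
mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(f"Mean gain: {mean_gain:.2f} points (t = {t:.2f}, p = {p:.4f})")
```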


Subject(s)
Artificial Intelligence; Students, Medical; Writing; Humans; China; Education, Medical, Undergraduate; Male; Female; Language
7.
Public Underst Sci; 9636625241252565, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38783772

ABSTRACT

In recent decades, members of the general public have become increasingly reliant on the findings of scientific studies for decision-making. However, scientific writing usually features heavy use of technical language, which can pose challenges for people outside the scientific community. To alleviate this issue, plain language summaries were introduced to summarize scientific papers briefly, in clear and accessible language. Despite increasing research attention to plain language summaries, little is known about whether they are actually readable for their intended audiences. Based on a large corpus sampled from six biomedical and life sciences journals, the present study examined the readability and technical jargon use of plain language summaries and scientific abstracts. It found that (1) plain language summaries were more readable than scientific abstracts, (2) the reading grade levels of plain language summaries were moderately correlated with those of scientific abstracts, (3) researchers used less jargon in plain language summaries than in scientific abstracts, and (4) the readability of, and jargon use in, both plain language summaries and scientific abstracts exceeded the thresholds recommended for the general public. Possible explanations for these findings are discussed, and implications for academic writing and science communication are offered.
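The grade-level comparison described here can be reproduced with standard readability formulas; a minimal sketch using the textstat package (an assumption, since the study does not name its tooling):

```python
# Compare the reading grade level of a plain language summary and a
# scientific abstract (textstat is assumed; the study's tooling is not named).
import textstat

abstract = ("Epithelial-mesenchymal transition was quantified via "
            "immunohistochemical staining of resected tumour specimens.")
summary = ("We measured how cancer cells change shape, which helps "
           "them spread to other parts of the body.")

for label, text in [("abstract", abstract), ("plain language summary", summary)]:
    # Flesch-Kincaid maps text to a US school grade level; lay-audience
    # guidance is often around grade 8 or below.
    print(f"{label}: grade {textstat.flesch_kincaid_grade(text):.1f}")
```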

8.
Front Res Metr Anal; 9: 1336190, 2024.
Article in English | MEDLINE | ID: mdl-38694235

ABSTRACT

Although long-standing preconceptions discourage authors from asserting their presence in research articles (RAs), recent studies have substantiated that self-mention markers offer a means of establishing authorial identity and recognition in a given discipline. Few studies, however, have explored specific sections of research articles to uncover how self-mentions function within each section's conventions. Examining the use of self-mention markers, the present study compared method sections written by native English writers and L1 Persian writers in the field of psychology. The corpus contained 120 RAs, with each sub-corpus comprising 60 RAs. The RAs were examined structurally and functionally, and the data were analyzed both quantitatively, using frequency counts and chi-square tests, and qualitatively, through content analysis. The findings indicated a significant difference between English and Persian authors in the frequency of self-mentions and in their rhetorical functions; the differences in grammatical forms and in hedging and boosting, however, were not significant. Native English authors were inclined to make more use of self-mentions in their research articles. These findings can help novice EAP and ESP researchers become aware of the conventions of authorial identity in each genre.

9.
Front Psychol; 15: 1384629, 2024.
Article in English | MEDLINE | ID: mdl-38784615

ABSTRACT

Dependency distance (DD) is an important factor in language processing and can affect the ease with which a sentence is understood. Previous studies have investigated the role of DD in L2 writing, but little is known about how the native language influences DD in L2 academic writing. This study is probably the first to investigate, through a large dataset of over 400 million words, whether the native language of L2 writers influences DD in their academic writing. Using a dataset of over 2.2 million article abstracts downloaded from Scopus in the fields of Arts & Humanities and Social Sciences, the study analyzes the DD patterns, parsed with the latest version of the syntactic parser Stanford CoreNLP (4.5.5), in the academic writing of L2 learners from different language backgrounds. It finds that native languages influence the DD of English L2 academic writing: when the mean dependency distance (MDD) of a native language is much longer than that of native English, the MDD of its speakers' English L2 academic writing is also much longer than that of native English academic writing. These findings deepen our insight into native language transfer in L2 academic writing and could shape pedagogical strategies in L2 academic writing education.
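Mean dependency distance is the average absolute distance, in word positions, between each word and its syntactic head. A minimal sketch of the computation using the stanza parser (a stand-in here; the study itself used Stanford CoreNLP 4.5.5):

```python
# Compute mean dependency distance (MDD) for a text (sketch).
# stanza stands in for the Stanford CoreNLP 4.5.5 pipeline used in the study.
import stanza

# stanza.download("en")  # one-time model download
nlp = stanza.Pipeline(lang="en", processors="tokenize,pos,lemma,depparse")

def mean_dependency_distance(text: str) -> float:
    doc = nlp(text)
    distances = [
        abs(word.id - word.head)
        for sentence in doc.sentences
        for word in sentence.words
        if word.head != 0  # skip the root, which has no governor
    ]
    return sum(distances) / len(distances)

print(mean_dependency_distance("The results the model produced surprised everyone."))
```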

10.
J Med Internet Res; 26: e52935, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38578685

ABSTRACT

BACKGROUND: Large language models (LLMs) have gained prominence since the release of ChatGPT in late 2022. OBJECTIVE: The aim of this study was to assess the accuracy of citations and references generated by ChatGPT (GPT-3.5) in two distinct academic domains: the natural sciences and the humanities. METHODS: Two researchers independently prompted ChatGPT to write an introduction section for a manuscript and include citations; they then evaluated the accuracy of the citations and Digital Object Identifiers (DOIs). Results were compared between the two disciplines. RESULTS: Ten topics were included: 5 in the natural sciences and 5 in the humanities. A total of 102 citations were generated, 55 in the natural sciences and 47 in the humanities. Among these, 40 citations (72.7%) in the natural sciences and 36 citations (76.6%) in the humanities were confirmed to exist (P=.42). There were significant disparities in DOI presence between the natural sciences (39/55, 70.9%) and the humanities (18/47, 38.3%), along with significant differences in DOI accuracy between the two disciplines (18/55, 32.7% vs 4/47, 8.5%). DOI hallucination was more prevalent in the humanities (42/47, 89.4%), and the Levenshtein distance was significantly higher in the humanities than in the natural sciences, reflecting the lower DOI accuracy. CONCLUSIONS: ChatGPT's performance in generating citations and references varies across disciplines. Differences in DOI standards and disciplinary nuances contribute to these variations. Researchers should consider the strengths and limitations of artificial intelligence writing tools with respect to citation accuracy; the use of domain-specific models may enhance accuracy.
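Levenshtein distance, used here to quantify DOI accuracy, counts the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another. A self-contained sketch with hypothetical DOIs (the study's actual identifiers are not reproduced):

```python
# Levenshtein distance between a generated and a registered DOI (sketch;
# both DOIs below are hypothetical examples).
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

generated = "10.1000/j.jmir.2023.04.017"   # hypothetical ChatGPT output
registered = "10.1000/j.jmir.2022.11.003"  # hypothetical true DOI
print(levenshtein(generated, registered))  # 0 would mean an exact match
```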


Subject(s)
Artificial Intelligence; Language; Humans; Reproducibility of Results; Researchers; Writing
11.
Anxiety Stress Coping; 1-14, 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38602251

ABSTRACT

BACKGROUND AND OBJECTIVES: Limited research has examined the mediating mechanisms underlying the association between procrastination in academic writing and negative emotional states during the COVID-19 pandemic. In the present study, we examined whether stress coping styles and self-efficacy for self-regulation of academic writing mediated the relationship between procrastination in academic writing and negative emotional states. DESIGN AND METHOD: Graduate students (N = 475, 61.7% female, mean age at baseline = 29.02 years, SD = 5.72) completed questionnaires at Time 1 (March 2020: Procrastination in Academic Writing and the Coping Inventory for Stressful Situations) and Time 2 (June 2020: the Self-Efficacy for Self-Regulation of Academic Writing Scale and the Depression, Anxiety, and Stress Scale-21). RESULTS: Emotion-oriented coping and self-efficacy for self-regulation of academic writing serially mediated the association between procrastination in academic writing and negative emotional states; task-oriented coping and self-efficacy for self-regulation of academic writing did so as well. CONCLUSIONS: These findings provide a plausible explanation of the roles that stress coping styles and self-efficacy for self-regulation of academic writing play in the association between procrastination in academic writing and negative emotional states.

12.
J Assist Reprod Genet; 41(7): 1871-1880, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38619763

ABSTRACT

PURPOSE: To evaluate the ability of ChatGPT-4 to generate a biomedical review article on fertility preservation. METHODS: ChatGPT-4 was prompted to create an outline for a review on fertility preservation in men and prepubertal boys. The outline was then used to prompt ChatGPT-4 to write the different sections of the review and provide five references for each. The sections and references were combined into a single scientific review that was evaluated by the authors, who are experts in fertility preservation. The experts assessed the article and the references for accuracy and checked for plagiarism using online tools. In addition, both experts independently scored the relevance, depth, and currentness of ChatGPT-4's article using a scoring matrix ranging from 0 to 5, where higher scores indicate higher quality. RESULTS: ChatGPT-4 successfully generated a relevant scientific article with references. Among 27 statements needing citations, four were inaccurate. Of the 25 references, 36% were accurate, 48% had correct titles but other errors, and 16% were completely fabricated. Plagiarism was minimal (mean = 3%). Experts rated the article's relevance highly (5/5) but gave lower scores for depth (2-3/5) and currentness (3/5). CONCLUSION: ChatGPT-4 can produce a scientific review on fertility preservation with minimal plagiarism. Although broadly precise in content, it showed factual and contextual inaccuracies and inconsistent reference reliability. These issues limit ChatGPT-4 as a sole tool for scientific writing but suggest its potential as an aid in the writing process.


Subject(s)
Artificial Intelligence; Fertility Preservation; Humans; Fertility Preservation/methods; Male; Writing; Female
13.
Stud Health Technol Inform; 313: 203-208, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38682531

ABSTRACT

This study scrutinizes free AI tools tailored to supporting literature review and analysis in academic research, focusing on how they respond to direct inquiries. Through a targeted keyword search, we cataloged relevant AI tools and evaluated their output variation and source validity. Our results reveal a spectrum of response qualities, with some tools integrating non-academic sources and others depending on outdated information. Notably, most tools lacked transparency in source selection. Our study highlights two key limitations: the exclusion of commercial AI tools and the focus solely on tools that accept direct research queries. This raises questions about the potential capabilities of paid tools and about the efficacy of combining various AI tools for enhanced research outcomes. Future research should explore the integration of diverse AI tools, assess the impact of commercial tools, and investigate the algorithms behind response variability. This study contributes to a better understanding of AI's role in academic research, emphasizing the importance of careful selection and critical evaluation of these tools in academic endeavors.


Subject(s)
Artificial Intelligence; Students; Humans; Researchers; Review Literature as Topic
14.
Med Sci Educ; 34(2): 439-444, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38686168

ABSTRACT

The world of publication can seem intimidating and closed to the newcomer. How, then, does one even begin to get a foot in the door? In this paper, the authors draw on the literature and their recent lived experience as editorial interns to consider this challenge under the theme of access and how it overlaps with the various components of academic publication. The three main components of the publication 'machine' discussed in this article are authoring, reviewing, and editing. These are preceded by the first, and arguably foundational, interaction with academic journal publishing: reading. Without reading articles across different journals, and even in different disciplines, understanding the breadth of scholarship and its purpose is impossible. The subsequent components of authoring, reviewing, and editing, all of which are enhanced by ongoing familiarity with the current literature, are considered in further detail in the remainder of this article, with practical advice on how to gain access and experience in each area: for example, writing non-research article manuscripts, engaging in collaborative peer review, and applying for editorial opportunities (with perseverance) when they present themselves. Medical education publication can seem daunting and closed to entry-level academics. This article is written to dispel this view and to challenge the notion that the world of publication is reserved for experts only. On the contrary, newcomers to the field are essential if academic publications are to retain relevance, dynamism, and innovation, particularly in the face of the changing landscape of medical education.

15.
Account Res; 1-17, 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38516933

ABSTRACT

Artificial Intelligence (AI) language models continue to expand in both access and capability. As these models have evolved, a growing number of academic journals in medicine and healthcare have explored policies regarding AI-generated text. Implementing such policies requires accurate AI detection tools: inaccurate detectors risk unnecessary penalties for human authors and may compromise the effective enforcement of guidelines against AI-generated content. Yet the accuracy of AI text detection tools in distinguishing human-written from AI-generated content has varied across published studies. This experimental study used a sample of behavioral health publications and found problematic false positive and false negative rates for both free and paid AI detection tools. The study assessed 100 research articles from 2016-2018 in behavioral health and psychiatry journals and 200 texts produced by AI chatbots (100 by ChatGPT and 100 by Claude). The free AI detector identified a median of 27.2% of the academic text as AI-generated, while the commercial software Originality.AI performed better but still had limitations, especially in detecting texts generated by Claude. These error rates raise doubts about relying on AI detectors to enforce strict policies around AI text generation in behavioral health publications.
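The error rates at issue here reduce to a confusion matrix over detector verdicts. A minimal sketch with made-up verdicts in place of the study's 300 texts:

```python
# False positive / false negative rates for an AI-text detector
# (made-up verdicts; the study used 100 human and 200 chatbot texts).
samples = [
    {"is_ai": False, "flagged": True},   # human text flagged as AI -> false positive
    {"is_ai": False, "flagged": False},
    {"is_ai": True,  "flagged": False},  # AI text passed as human -> false negative
    {"is_ai": True,  "flagged": True},
]

humans = [s for s in samples if not s["is_ai"]]
ais = [s for s in samples if s["is_ai"]]
fpr = sum(s["flagged"] for s in humans) / len(humans)
fnr = sum(not s["flagged"] for s in ais) / len(ais)
print(f"False positive rate: {fpr:.1%}")
print(f"False negative rate: {fnr:.1%}")
```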

16.
Teach Learn Med; 1-15, 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38551184

ABSTRACT

Problem: Syrian medical research synthesis lags behind that of neighboring countries. The Syrian war has exacerbated the situation, creating obstacles such as destroyed infrastructure, an inflated clinical workload, and deteriorated medical training. Poor scientific writing skills have ranked first among perceived obstacles that could be modified to improve Syrian research conduct at every academic level. However, limited access to personal and physical resources in conflict areas consistently hampers the implementation of standard professional-led interventions. Intervention: We designed a peer-run online academic writing and publishing workshop as a feasible, affordable, and sustainable training method for low-resource settings. The workshop covered the structure of scientific articles, the basics of academic writing, plagiarism, and the publication process, supplemented by six practical assignments to exercise the learned skills. Context: The workshop targeted healthcare professionals and medicine, dentistry, and pharmacy trainees (undergraduate and postgraduate) at all Syrian universities. We employed a systematic design to evaluate the workshop's short- and long-term impact under different instructional delivery methods and assignment formats. Participants were assigned in a stratified manner to four groups: two attended the workshop synchronously and two asynchronously. One arm in each group underwent supervised peer-review evaluation of the practical writing exercises (active), while the other arm self-reviewed their work on the same exercises using exemplary solutions (passive). We assessed knowledge (30 questions), confidence in the learned skills (11 questions), and the need for further guidance in academic writing (1 question) before the workshop and one month and one year after it. Impact: One hundred twenty-one participants completed the workshop, showing improved knowledge, greater confidence, and a reduced need for guidance, and these gains remained stable at one-year follow-up. Outcomes for the synchronous and asynchronous groups were similar. Completing the practical assignments was associated with greater knowledge and confidence only in the active arms, and participants in the active arms who engaged in the peer-review process showed a greater increase in knowledge and reported less need for guidance than those who did not. Lessons learned: Peer-run interventions can provide an effective, affordable alternative for improving scientific writing skills in settings with limited resources and expertise. Online academic writing training can yield improvements regardless of the method of attendance (synchronous versus asynchronous), and participation in supplementary practical exercises, especially when paired with peer review, may improve knowledge and confidence.

17.
Br J Nurs; 33(6): 292-298, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38512784

ABSTRACT

Nursing programmes were flexible during the COVID-19 pandemic, offering simulation to replace clinical hours and adjusting supervision and assessment. However, second-year students in two modules obtained lower results despite no changes to the material, the teaching team or the delivery. OBJECTIVES: A retrospective cohort study of second-year adult nursing students who submitted written assignments was conducted to analyse recurring patterns that could explain the failure rate. METHOD: Data from 265 university students were analysed to identify patterns of association in demographics, module results and student engagement indicators. RESULTS: A positive correlation was found between age and assignment results, with older students achieving higher grades. Clustering identified three patterns of student engagement: students who engaged with all aspects of the course (30.2%) performed significantly better than those in the other clusters (P<0.001), and students with disabled-student support recommendations performed notably worse than those without. All sizeable differences resolved following the return to campus and the introduction of additional writing support. DISCUSSION: Age, cross-medium engagement and preparation were all shown to affect marks. These findings can inform how higher education institutions drive and monitor engagement, as this study suggests that all parts of a blended learning approach are equally important.
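The clustering step reported here is commonly implemented as k-means over standardized engagement indicators; a sketch under that assumption (the study does not name its algorithm):

```python
# Cluster students into engagement patterns (sketch; k-means on
# standardized indicators is an assumption, not the study's stated method).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical indicators per student:
# [lecture attendance %, VLE logins, formative submissions]
X = np.array([
    [95, 120, 5],
    [40,  30, 1],
    [80,  90, 4],
    [20,  10, 0],
    [88, 100, 5],
    [55,  45, 2],
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X)
)
print(labels)  # cluster membership per student
```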


Subject(s)
Education, Nursing, Baccalaureate; Students, Nursing; Adult; Humans; Education, Nursing, Baccalaureate/methods; Retrospective Studies; Pandemics; Learning
18.
Qual Health Res; 10497323231225150, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38425252

ABSTRACT

Qualitative social scientists working in medical faculties have to meet multiple expectations. On the one hand, they are expected to comply with the philosophical and theoretical expectations of the social sciences. On the other hand, they may also be expected to produce publications which align with biomedical definitions and framings of quality. As interdisciplinary scholars, they must handle (at least) two sets of journal editors, peer reviewers, grant-awarding panels, and conference audiences. In this paper, we extend the current knowledge base on the 'dual expectations' challenge by drawing on Orlikowski and Yates' theoretical concept of communicative genres. A 'genre' in this context is a format of communication (e.g. letter, email, academic paper, or conference presentation) aimed at a particular audience, having a particular material form and socio-linguistic style, and governed by both formal requirements and unwritten social rules. Becoming a member of any community of practice involves becoming familiar with its accepted communicative genres and adept in using them. Academic writing, for example, is a craft that is learned through participation in the social process of communicating one's ideas to one's peers in journal articles and other formats. In this reflective paper, we show how the concept of a communicative genre can sensitise us to the conflicting and often dissonant expectations and rule systems governing different academic fields. We use this key concept to suggest ways in which faculties can support early-career researchers to progress in careers which straddle qualitative social science and medical science.

20.
Trends Ecol Evol; 39(4): 307-310, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38395671

ABSTRACT

Academic writing is difficult, especially for non-native English speakers. We share a perspective on writing with a set of heuristics called the Writing Alphabet, consisting of Accurate, Brief, Clear, Dynamic, Engaging, Flowing, Goal, Habit, and Investment. These points can help struggling writers identify issues and, importantly, internalise good writing practices.
