Results 1 - 16 of 16
1.
Int J Health Policy Manag ; 7(9): 782-790, 2018 09 01.
Article in English | MEDLINE | ID: mdl-30316226

ABSTRACT

BACKGROUND: National licensing examinations (NLEs) are large-scale examinations usually taken by medical doctors close to the point of graduation from medical school. Where NLEs are used, success is usually required to obtain a license for full practice. Approaches to national licensing, and the evidence that supports their use, vary significantly across the globe. This paper aims to develop a typology of NLEs, based on candidacy, to explore the implications of different examination types for workforce planning. METHODS: A systematic review of the published literature and medical licensing body websites, an electronic survey of all medical licensing bodies in highly developed nations, and a survey of medical regulators. RESULTS: The evidence gleaned through this systematic review highlights four approaches to NLEs: where graduating medical students wishing to practice in their national jurisdiction must pass a national licensing exam before they are granted a license to practice; where all prospective doctors, whether from the national jurisdiction or international medical graduates, are required to pass a national licensing exam in order to practice within that jurisdiction; where only international medical graduates are required to pass a licensing exam, if their qualifications are not acknowledged to be comparable with those of graduates from the national jurisdiction; and where there are no NLEs in operation. This typology facilitates comparison across systems and highlights the implications of different licensing systems for workforce planning. CONCLUSION: The issue of national licensing cannot be viewed in isolation from workforce planning; future research on the efficacy of national licensing systems to drive up standards should be integrated with research on the implications of such systems for the mobility of doctors across borders.


Subject(s)
Clinical Competence , Developed Countries , Education, Medical , Licensure, Medical , Schools, Medical , Humans , Clinical Competence/standards , Education, Medical/classification , Education, Medical/standards , Educational Measurement/standards , Internationality , Licensure, Medical/classification , Licensure, Medical/standards , Physicians/standards , Schools, Medical/classification , Schools, Medical/standards , Specialty Boards/standards
2.
BMC Med Educ ; 18(1): 126, 2018 Jun 07.
Article in English | MEDLINE | ID: mdl-29879954

ABSTRACT

BACKGROUND: Standard setting is one of the most contentious topics in educational measurement. Commonly used methods all have well-reported limitations. To date, there is no conclusive evidence on which standard-setting method yields the highest validity. METHODS: The method described and piloted in this study asked expert judges to estimate the scores on a real MCQ examination that they considered indicated a clear pass, a clear fail, and the pass mark for the examination as a whole. The mean and SD of the judges' responses, Z scores, and confidence intervals were used to derive the cut-score and the confidence in it. RESULTS: In this example the new method's cut-score was higher than the judges' estimate. The method also yielded estimates of statistical error, which determine the range of the acceptable cut-score and the estimated level of confidence one may have in its accuracy. CONCLUSIONS: This new standard-setting method offers some advances, and possibly advantages, in that the decisions asked of judges are based on firmer constructs, and it takes variation among judges into account.
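The cut-score arithmetic described in this abstract can be sketched in Python. This is an illustrative reconstruction, not the paper's actual procedure: the function name, the sample judge estimates, and the use of a normal-theory confidence interval around the judges' mean pass mark are all assumptions.

```python
import math
import statistics

def cut_score_with_ci(judge_pass_marks, z=1.96):
    # Hypothetical sketch: derive a cut-score from the mean of the
    # judges' pass-mark estimates, and quantify confidence in it with
    # a normal-theory interval based on the SD across judges.
    mean = statistics.mean(judge_pass_marks)
    sd = statistics.stdev(judge_pass_marks)      # variation among judges
    sem = sd / math.sqrt(len(judge_pass_marks))  # standard error of the mean
    return mean, (mean - z * sem, mean + z * sem)

# Illustrative pass-mark estimates (percent) from six judges
cut, ci = cut_score_with_ci([55, 60, 58, 62, 57, 59])
```

A wider interval signals less agreement among judges, and hence less confidence in the derived cut-score.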


Subject(s)
Education, Medical/standards , Educational Measurement/standards , Students, Medical , Australia , Clinical Competence , Confidence Intervals , Decision Making , Feasibility Studies , Humans , Pilot Projects , Reference Standards , Reproducibility of Results , Surveys and Questionnaires/standards
3.
BMC Med Educ ; 16(1): 212, 2016 Aug 19.
Article in English | MEDLINE | ID: mdl-27543269

ABSTRACT

BACKGROUND: To investigate the existing evidence base for the validity of large-scale licensing examinations, including their impact. METHODS: Systematic review against a validity framework, searching Embase (Ovid Medline), Medline (EBSCO), PubMed, Wiley Online, ScienceDirect, and PsycINFO from 2005 to April 2015. Papers were included when they discussed national or large regional (state-level) examinations for clinical professionals, linked to examinations early in careers or near the point of graduation, where success was required to subsequently be able to practice. Using a standardized data extraction form, two independent reviewers extracted study characteristics, with the rest of the team resolving any disagreement. A validity framework developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education was used to evaluate each paper's evidence supporting or refuting the validity of national licensing examinations. RESULTS: 24 published articles provided evidence of validity across the five domains of the validity framework. Most papers (n = 22) provided evidence of national licensing examinations' relationships to other variables and their consequential validity. Overall, there was evidence that those who do well on earlier or subsequent examinations also do well on national testing. There is a correlation between NLE performance and some patient outcomes and rates of complaints, but no causal link has been established. CONCLUSIONS: The debate around licensure examinations is strong on opinion but weak on validity evidence. This is especially true of the wider claims that licensure examinations improve patient safety and practitioner competence.


Subject(s)
Developed Countries , Education, Medical, Graduate/standards , Internship and Residency/standards , Licensure, Medical , Clinical Competence/standards , Delivery of Health Care/standards , Educational Measurement , Evidence-Based Medicine , Humans , Licensure, Medical/standards , Licensure, Medical/trends
4.
BMC Med Educ ; 16: 191, 2016 Jul 25.
Article in English | MEDLINE | ID: mdl-27455964

ABSTRACT

BACKGROUND: The Objective Structured Clinical Examination (OSCE) is now a standard assessment format, and while examiner training is seen as essential to assure quality, there appear to be no widely accepted measures of examiner performance. METHODS: The objective of this study was to determine whether the routine training provided to examiners improved their accuracy and reduced their mental workload. Accuracy was defined as the difference between the rating of each examiner and that of an expert group, expressed as the mean error per item. At the same time, the mental workload of each examiner was measured using a previously validated secondary-task methodology. RESULTS: Training was not associated with an improvement in accuracy (p = 0.547), and there was no detectable effect on mental workload. However, accuracy improved after exposure to the same scenario (p < 0.001), and accuracy was greater when marking an excellent performance than a borderline one. CONCLUSIONS: This study suggests that the method of training OSCE examiners studied is not effective in improving their performance, but that average item accuracy and mental workload appear to be valid measures of examiner performance.


Subject(s)
Educational Measurement/standards , Professional Competence/standards , Task Performance and Analysis , Workload/psychology , Humans , Pilot Projects , Reaction Time , Reproducibility of Results , Research Design , Students, Medical
5.
BMC Med Educ ; 16: 34, 2016 Jan 28.
Article in English | MEDLINE | ID: mdl-26821741

ABSTRACT

BACKGROUND: Fixed mark grade boundaries for non-linear assessment scales fail to account for variations in assessment difficulty. Where assessment difficulty varies more than the ability of successive cohorts or the quality of the teaching, anchoring grade boundaries to median cohort performance should provide an effective method for setting standards. METHODS: This study investigated the use of a modified Hofstee (MH) method for setting unsatisfactory/satisfactory and satisfactory/excellent grade boundaries for multiple choice question-style assessments, adjusted using the cohort median to obviate the effect of subjective judgements and the provision of grade quotas. RESULTS: Outcomes for the MH method were compared with formula scoring/correction for guessing (FS/CFG) for 11 assessments, indicating that there were no significant differences between MH and FS/CFG in either the effective unsatisfactory/satisfactory grade boundary or the proportion of unsatisfactory graded candidates (p > 0.05). However, the boundary for excellent performance was significantly higher for MH (p < 0.01), and the proportion of candidates returned as excellent was significantly lower (p < 0.01). MH also generated performance profiles and pass marks that were not significantly different from those given by the Ebel method of criterion-referenced standard setting. CONCLUSIONS: This supports MH as an objective model for calculating variable grade boundaries, adjusted for test difficulty. Furthermore, it easily creates boundaries for unsatisfactory/satisfactory and satisfactory/excellent performance that are protected against grade inflation. It could be implemented as a stand-alone method of standard setting, or as part of the post-examination analysis of results for assessments for which pre-examination criterion-referenced standard setting is employed.
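The core idea of anchoring grade boundaries to median cohort performance can be sketched as follows. This is a simplified illustration under stated assumptions: the function name, the nominal boundary values, the reference median, and the simple additive shift are all hypothetical; the paper's actual modified Hofstee calculation is more involved.

```python
import statistics

def median_adjusted_boundaries(scores, nominal_pass=50.0,
                               nominal_excellent=70.0,
                               reference_median=60.0):
    # Shift the nominal boundaries by the difference between this
    # cohort's median and a reference median, so a harder paper
    # (lower cohort median) yields lower boundaries. All parameter
    # values here are illustrative, not the paper's.
    shift = statistics.median(scores) - reference_median
    return nominal_pass + shift, nominal_excellent + shift

pass_mark, excellent_mark = median_adjusted_boundaries(
    [42, 55, 58, 61, 64, 70, 77])
```

Because the shift tracks the cohort median rather than fixed marks, the boundaries adjust automatically for test difficulty without judges revisiting every paper.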


Subject(s)
Education, Medical, Undergraduate/standards , Educational Measurement/standards , Students, Medical , Education, Medical, Undergraduate/statistics & numerical data , Educational Measurement/methods , Educational Measurement/statistics & numerical data , Humans , Models, Educational , United Kingdom
6.
Med Teach ; 38(3): 250-4, 2016.
Article in English | MEDLINE | ID: mdl-26474218

ABSTRACT

It is incumbent on medical schools to show, both to regulatory bodies and to the public at large, that their graduating students are "fit for purpose" as tomorrow's doctors. Since students graduate by virtue of passing assessments, it is vital that schools quality assure their assessment procedures, standards, and outcomes. An important part of this quality assurance process is the appropriate use of psychometric analyses. This begins with development of an empowering, evidence-based culture in which assessment validity can be demonstrated. Preparation prior to an assessment requires the establishment of appropriate rules, test blueprinting and standard setting. When an assessment has been completed, the reporting of test results should consider reliability, assessor, demographic, and long-term analyses across multiple levels, in an integrated way to ensure the information conveyed to all stakeholders is meaningful.


Subject(s)
Psychometrics/methods , Psychometrics/standards , Schools, Medical/organization & administration , Age Factors , Curriculum , Humans , Quality Control , Reproducibility of Results , Schools, Medical/standards , Sex Factors , Socioeconomic Factors
7.
Psychiatr Bull (2014) ; 38(5): 236-42, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25285223

ABSTRACT

Aims and method To investigate medical students' performance in, and perceptions of, the mental state examination (MSE) at a medical school with a modern integrated curriculum. We undertook an evaluative case study comprising a survey and analysis of performance data. The study is presented in two parts: part 1 discusses the students' perceptions of the MSE and of teaching, learning and practising it. Results Most students in the study group considered the MSE an important examination in medicine. Other perceptions, grouped in themes, are presented. Unsurprisingly, most students found psychiatric attachments the most useful part of the course for learning about the MSE. About half of the students had witnessed an MSE being undertaken in clinical practice. Clinical implications Although students appear to recognise the importance of this examination in medicine, its teaching and learning may need greater emphasis in the undergraduate curriculum, and teaching and learning opportunities should be improved throughout the course.

8.
Psychiatr Bull (2014) ; 38(5): 243-8, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25285224

ABSTRACT

Aims and method To investigate medical students' performance in, and perceptions of, the mental state examination (MSE) at a medical school with a modern integrated curriculum. We undertook an evaluative case study comprising a survey and analysis of performance data. The study is presented in two parts: part 2 reports the students' performance data as assessed by integrated structured clinical examination (ISCE). Results In the questionnaire data, about a third of students (32.7%) thought that the MSE ISCE station was more difficult than the non-MSE stations. The ISCE performance data indicate that there are no significant differences between students' scores in the MSE station and the non-MSE stations. Clinical implications Most students do not find the MSE ISCE station more difficult than other ISCE stations. Perhaps, therefore, students should be reassured that assessments in psychiatry are just like other assessments in medicine. For some students, however, performing at the MSE ISCE station is a more complex challenge.

9.
J Dent Educ ; 76(4): 487-94, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22473561

ABSTRACT

The significance of the educational environment in health professions academic institutions, increasingly recognized on a global scale, is fundamental to effective student learning. This study was carried out to evaluate students' perceptions of the educational environment in five undergraduate dental institutions in Pakistan. This non-interventional study used a postal questionnaire based on the Dundee Ready Educational Environment Measure (DREEM). The subjects were dental students taking the final professional B.D.S. examination at five dental institutions affiliated with the University of Health Sciences, Lahore, Pakistan. A total of 197 students participated in the study (response rate of 83.82 percent). The overall DREEM score was 115.06 (Cronbach's alpha 0.87). Nine items recorded scores <2 and were flagged for remediation. Significant differences were observed between students' perceptions of learning and of teachers (p<0.05). Many issues challenge the quality and delivery of dental education in Pakistan, and dental institutions need to develop robust mechanisms to incorporate contemporary international trends in dental education in order to improve the educational environment.


Subject(s)
Attitude , Environment , Learning , Schools, Dental , Students, Dental/psychology , Adult , Education, Dental , Faculty, Dental , Female , Humans , Interpersonal Relations , Male , Pakistan , Self Concept , Sex Factors , Social Perception , Surveys and Questionnaires , Young Adult
10.
BMJ Qual Saf ; 20(8): 711-7, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21676948

ABSTRACT

BACKGROUND: In 2008, the WHO produced a surgical safety checklist against a background of a poor patient safety record in operating theatres. Formal team briefings are now standard practice in high-risk settings such as the aviation industry and improve safety, but are resisted in surgery. Research evidence is needed to persuade the surgical workforce to adopt safety procedures such as briefings. OBJECTIVE: To investigate whether exposure to pre-surgery briefings is related to perception of safety climate. METHODS: Three Safety Attitude Questionnaires, completed by operating theatre staff in 2003, 2004 and 2006, were used to evaluate the effects of an educational intervention introducing pre-surgery briefings. RESULTS: Individual practitioners who agree with the statement 'briefings are common in the operating theatre' also report a better 'safety climate' in operating theatres. CONCLUSIONS: The study reports a powerful link between briefing practices and attitudes towards safety. Findings build on previous work by reporting on the relationship between briefings and safety climate within a 4-year period. Briefings, however, remain difficult to establish in local contexts without appropriate team-based patient safety education. Success in establishing a safety culture, with associated practices, may depend on first establishing unidirectional, positive change in attitudes to create a safety climate.


Subject(s)
Attitude of Health Personnel , Checklist , Operating Rooms/organization & administration , Safety Management/organization & administration , Surgical Procedures, Operative/methods , Humans , Medical Errors/prevention & control , Organizational Culture
11.
Med Teach ; 32(6): 464-6, 2010.
Article in English | MEDLINE | ID: mdl-20515373

ABSTRACT

BACKGROUND: To use progress testing, a large bank of questions is required, particularly when planning to deliver tests over a long period of time. The questions need not only to be of good quality but also balanced in subject coverage across the curriculum to allow appropriate sampling. Hence, as well as creating its own questions, an institution can share questions. Both methods allow ownership and structuring of the test appropriate to the educational requirements of the institution. METHOD: Peninsula Medical School (PMS) has developed a mechanism to validate questions written in house. That mechanism can be adapted to utilise questions from an international question bank, the International Digital Electronic Access Library (IDEAL), and a UK-based question bank, the Universities Medical Assessment Partnership (UMAP). These questions have been used in our progress tests and analysed for relative performance. RESULTS: Data are presented to show that questions from differing sources can perform comparably in a progress-testing format. CONCLUSION: There are difficulties in transferring questions from one institution to another, including differences in curricula and culture. While many of these difficulties persist, our experience suggests that only a relatively small amount of work is required to adapt questions from external question banks for effective use. The longitudinal aspect of progress testing (albeit summative) may allow more flexibility in question usage than single high-stakes exams.


Subject(s)
Educational Measurement/standards , Schools, Medical , Humans , Reproducibility of Results , United Kingdom
12.
Med Teach ; 32(6): 486-90, 2010.
Article in English | MEDLINE | ID: mdl-20515378

ABSTRACT

BACKGROUND: Progress testing is used at Peninsula Medical School to test applied medical knowledge four times a year using a 125-item multiple choice test. Items within each test are classified and matched to the curriculum blueprint. AIM: To examine the use of item classifications as part of a quality assurance process and to examine the range of available feedback provided after each test or group of tests. METHODS: The questions were classified using a single best classification method. These were placed into a simplified version of the progress test assessment blueprint. Average item facilities for individuals and cohorts were used to provide feedback to individual students and curriculum designers. RESULTS: The analysis shows that feedback can be provided at a number of levels, and inferences about various groups can be made. It demonstrates that learning mostly occurs in the early years of the course, but when examined longitudinally, it shows how different patterns of learning exist in different curriculum areas. It also shows that the effect of changes in the curriculum may be monitored through these data. CONCLUSIONS: Used appropriately, progress testing can provide a wide range of feedback to every individual or group of individuals in a medical school.


Subject(s)
Educational Measurement/standards , Feedback , Evaluation Studies as Topic , Humans , Schools, Medical , Students, Medical , United Kingdom
13.
Med Teach ; 32(6): 500-2, 2010.
Article in English | MEDLINE | ID: mdl-20515381

ABSTRACT

Although progress testing (PT) is well established in several medical schools, it is new to dentistry. Peninsula College of Medicine and Dentistry has recently established a Bachelor of Dental Surgery programme and has been one of the first schools to use PT in a dental setting. Issues associated with its development and of its adaption to the specific needs of the dental curriculum are considered.


Subject(s)
Education, Dental , Educational Measurement , Humans , United Kingdom
14.
Med Teach ; 32(6): 513-5, 2010.
Article in English | MEDLINE | ID: mdl-20515384

ABSTRACT

This article is primarily an opinion piece which aims to encourage debate and future research. There is little theoretical or practical research on how best to design progress tests. We propose that progress test designers should be clear about the primary purpose of their assessment. We provide some empirical evidence about reliability and cost based upon generalisability theory. We suggest that future research is needed in the areas of educational impact and acceptability.


Subject(s)
Decision Making , Educational Measurement , Models, Theoretical , Reproducibility of Results
15.
Adv Health Sci Educ Theory Pract ; 15(2): 265-75, 2010 May.
Article in English | MEDLINE | ID: mdl-19763855

ABSTRACT

The purpose of multiple choice tests of medical knowledge is to estimate as accurately as possible a candidate's level of knowledge. However, concern is sometimes expressed that multiple choice tests may also discriminate in undesirable and irrelevant ways, such as between minority ethnic groups or by sex of candidates. There is little literature to establish whether multiple choice tests may also discriminate against students with specific learning disabilities (SLDs), in particular those with a diagnosis of dyslexia, and whether the commonly-used accommodations allow such students to perform up to their capability. We looked for evidence to help us determine whether multiple choice tests could be relied upon to test all medical students fairly, regardless of disability. We analyzed the mean scores of over 900 undergraduate medical students on eight multiple-choice progress tests containing 1,000 items using a repeated-measures analysis of variance. We included disability, gender and ethnicity as possible explanatory factors, as well as year group. There was no significant difference between mean scores of students with an SLD who had test accommodations and students with no SLD and no test accommodation. Virtually all students were able to complete the tests within the allowed time. There were no significant differences between the mean scores of known minority ethnic groups or between the genders. We conclude that properly-designed multiple-choice tests of medical knowledge do not systematically discriminate against medical students with specific learning disabilities.


Subject(s)
Education, Medical/methods , Educational Measurement/methods , Learning Disabilities/psychology , Students, Medical/psychology , Female , Humans , Male
16.
Med Educ ; 43(6): 589-93, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19493184

ABSTRACT

OBJECTIVES: There has been little work on standard setting for progress tests and it is common practice to use normative standards. This study aimed to develop a new approach to standard setting for progress tests administered at the point when students approach graduation. METHODS: In this study we obtained performance data from newly qualified doctors and used this information to set the standard for the last progress test in the final year of undergraduate medical education. This external reference was validated against projections of student performance data based upon normative grading, and other published results. A simple linear growth model was used to set pass scores for progress tests earlier in the final year and this was also validated by published data. RESULTS: There was good agreement between standards set using the data from newly qualified doctors, the standard expected from extrapolation of the student progression data, and published performance data from another medical school. CONCLUSIONS: We have demonstrated that a combination of data from independent sources can be used to triangulate standard-setting decisions for progress tests. Performance data from successive cohorts of medical students could provide a fruitful source of information for standard setting for progress tests.
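The "simple linear growth model" mentioned in this abstract can be sketched as linear interpolation between a start-of-year standard and the externally referenced end-of-year standard. The function name and the parameter values below are illustrative assumptions, not figures from the paper.

```python
def linear_growth_pass_scores(n_tests, start_standard, end_standard):
    # Pass scores rise linearly from the start-of-year standard to the
    # end-of-year standard (the latter anchored to performance data
    # from newly qualified doctors, per the abstract).
    step = (end_standard - start_standard) / (n_tests - 1)
    return [start_standard + i * step for i in range(n_tests)]

# e.g. four progress tests in the final year (illustrative values)
pass_scores = linear_growth_pass_scores(4, 55.0, 64.0)
```

Only the end-of-year standard needs external referencing; the earlier tests inherit their pass scores from the assumed linear trajectory.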


Subject(s)
Benchmarking/standards , Education, Medical, Graduate/standards , Educational Measurement/standards , Program Evaluation/standards , Clinical Competence/standards , Educational Measurement/methods , Humans , Reference Standards , Statistics as Topic , Students, Medical