Show simple item record

dc.contributor.author [en_US]: Roberts, Kirsten Colleen
dc.date.accessioned: 2009-10-06T18:11:47Z
dc.date.available: 2009-10-06T18:11:47Z
dc.date.issued [en_US]: 2007
dc.identifier.other [en_US]: b59638400
dc.identifier.other [en_US]: 234185939
dc.identifier.other [en_US]: Thesis 58027
dc.identifier.uri: http://hdl.handle.net/1773/7727
dc.description [en_US]: Thesis (Ph. D.)--University of Washington, 2007.
dc.description.abstract [en_US]:

Medical education is seeing a renewed emphasis on clinical skills proficiency. Unlike medical knowledge assessment, clinical skills cannot be adequately assessed using paper-and-pencil tests. The Objective Structured Clinical Examination (OSCE) is now generally accepted as a means of assessing medical students' clinical skills proficiency. Given the growing popularity of the OSCE, research is needed to support the validity and reliability of scores from these performance-based assessments.

Data for this study were obtained from the 2005 and 2006 senior OSCE administrations at the University of Washington School of Medicine. Scores were obtained for 346 students. The four medical cases common to both years were identified and used in this study. OSCE validity and reliability issues were explored using classical test theory principles and structural equation modeling techniques.

Several research questions were of interest. Are the OSCE checklist scores measuring a unitary dimension or multiple dimensions of clinical skills? The results of the structural analysis and a confirmatory factor analysis suggested that the OSCE checklists were measuring multiple dimensions. A single overall rating was assigned to each medical case; however, a score for each dimension might have provided a more accurate assessment of the student's clinical skills and would have allowed for more effective formative instruction.

Are all items on the OSCE checklist equally predictive of the final judgment about student proficiency? The data suggested that, in some cases, items from the write-up component of the medical case checklist appeared to have more influence on the overall score than did the items from the standardized patient encounter checklist.

How well do OSCE scores from a refined checklist predict other judgments of student proficiency? The scores from the revised subscales were correlated with USMLE Step 2 Clinical Knowledge (CK) scores for discriminant evidence of construct validity. Although several of the subscales did correlate with USMLE Step 2 CK scores at a statistically significant level, these correlations were not high enough to be valuable in a practical sense.
dc.format.extent [en_US]: v, 109 p.
dc.language.iso [en_US]: en_US
dc.rights [en_US]: Copyright is held by the individual authors.
dc.rights.uri [en_US]:
dc.subject.other [en_US]: Theses--Education
dc.title [en_US]: A validity and reliability study of the objective structured clinical examination
dc.type [en_US]: Thesis

