
Assessment of Clinical Teaching: Developing an instrument to improve feedback to clinical teachers (Abstract).

Lookup NU author(s): Simon Cotterill, Emeritus Professor John Spencer, Professor Roger Barton

Downloads

Full text for this publication is not currently held within this repository. Alternative links are provided below where available.


Abstract

Background

Less time in training before reaching senior grades, due to reduced working hours and changes in consultant contracts, is driving improvements in the quality of clinical training [1]. Clinical trainers can have a significant influence on learning outcomes [2], but many receive no training for this role [3]. Current evaluation instruments provide limited feedback to help improve teaching performance. Most list 'high-inference' characteristics, requiring raters to make judgements about the trainer, but only the higher points on the scale tend to be used, reducing their ability to discriminate between teachers. 'Low-inference' observations of highly specific behaviours provide more useful feedback to teachers [4]. Evaluation instruments also tend to apply to whole rotations or semesters, and respondents' scores are aggregated, reducing the specificity of the feedback they provide. We have developed a psychometrically sound checklist of 38 observable, low-inference behaviours to evaluate a single teaching episode. Scores are normally distributed and can be compared with those of peers. This provides trainers with graphical feedback from the perspective of each person present (trainers, trainees and other observers), which can be used to inform reflection on, and modification of, their teaching.

Aim

We aimed to explore the use of the programme and the acceptability and utility of the feedback provided.

Methods

A web-based, cross-informant programme, presenting items in randomised order, was designed to record behaviour observed during a single teaching episode. Participation in the evaluation was recorded in trainees' e-portfolios. Once data entry was complete for each episode, or after a period of two weeks, trainers were able to request feedback on their teaching performance. Trainers were invited to participate in a brief interview investigating the use of the checklist and the usefulness of the feedback in reflecting upon and modifying their teaching. Interviews were analysed using thematic analysis.

Results

The web-based programme will be presented at the conference. Initial findings from the qualitative evaluation of the checklist were that it reminded trainers about alternative teaching strategies. New themes will be presented.

Discussion

We have developed a web-based evaluation instrument that provides fine-grained feedback, identifies the strengths and weaknesses of clinical teachers from the perspective of all those present, and allows trainers to compare their performance with that of their peers. Trainers have found the feedback useful for reflecting upon and modifying their teaching.

References

1. Epstein J. Reduced working hours and surgical training in the UK. Surgery 2004;22:i-ii.
2. Griffith CH, Wilson JF, Haist SA, Ramsbottom-Lucier M. Relationships of how well attending physicians teach to their students' performance and residency choices. Acad Med 1997;72(10 suppl):S118-S120.
3. Cottrell D, Kilminster S, Jolly B, Grant J. What is effective supervision and how does it happen? A critical incident study. Med Educ 2002;36:1042-1049.
4. Murray HG. Low-inference classroom teaching behaviours and student ratings of college teaching effectiveness. J Educ Psychol 1983;75:138-149.
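The abstract does not describe the implementation of the web-based programme. The following is a minimal illustrative sketch in Python, not the authors' code, of the two mechanisms it mentions: presenting checklist items in randomised order and comparing a trainer's per-item scores for a single episode with peer norms. The function names (presentation_order, peer_z_scores) and the data shapes are assumptions made for illustration only.

```python
# Illustrative sketch only; the actual instrument's data model and scoring
# method are not described in the abstract.
from statistics import mean, stdev
from random import shuffle


def presentation_order(item_ids):
    """Return the checklist items in a randomised order for the web form."""
    order = list(item_ids)
    shuffle(order)
    return order


def peer_z_scores(episode_scores, peer_scores_by_item):
    """Compare one episode's item scores against peer means and SDs.

    episode_scores: {item_id: mean rating across raters for this episode}
    peer_scores_by_item: {item_id: [scores from other trainers' episodes]}
    Returns {item_id: z-score}; positive values are above the peer average.
    """
    z = {}
    for item, score in episode_scores.items():
        peers = peer_scores_by_item.get(item, [])
        if len(peers) < 2:
            continue  # not enough peer data to form a norm for this item
        mu, sd = mean(peers), stdev(peers)
        z[item] = 0.0 if sd == 0 else (score - mu) / sd
    return z
```

A z-score per item is one plausible way to support the peer comparison and graphical feedback described above; the authors may have used a different scoring or norming approach.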


Publication metadata

Author(s): Gardiner N, Corbett S, Cotterill S, Spencer J, Barton JR

Publication type: Conference Proceedings (inc. Abstract)

Publication status: Published

Conference Name: ASME Annual Scientific Meeting 2009

Year of Conference: 2009

Publisher: Association for the Study of Medical Education

URL: http://www.asme.org.uk/images/ABSTRACTS_09.pdf

