Assessment of Clinical Teaching: Developing an instrument to improve feedback to clinical teachers (Abstract).
Emeritus Professor John Spencer
Professor Roger Barton
Gardiner N, Corbett S, Cotterill S, Spencer J, Barton JR
Conference Proceedings (inc. Abstract)
ASME Annual Scientific Meeting 2009
The Royal College of Physicians, Edinburgh
Year of Conference: 2009
Source Publication Date: 15-17 July 2009
Full text for this publication is not currently held within this repository.
Background
Less time in training before reaching senior grades, due to reduced working hours and changes in consultant contracts, is driving improvements in the quality of clinical training [1]. Clinical trainers can have a significant influence on learning outcomes [2], but many receive no training for this role [3]. Current evaluation instruments provide limited feedback to help improve teaching performance. Most list ‘high-inference’ characteristics, requiring raters to make judgements about the trainer, but only the higher points on the scale tend to be used, reducing their ability to discriminate between teachers. ‘Low-inference’ observations of highly specific behaviours provide more useful feedback to teachers [4]. Evaluation instruments also tend to apply to whole rotations or semesters, and respondents’ scores are aggregated, reducing the specificity of the feedback they provide. We have developed a psychometrically sound checklist of 38 observable low-inference behaviours to evaluate a single teaching episode. Scores are normally distributed and can be compared with those of peers. This provides trainers with graphical feedback from the perspective of each person present (trainers, trainees and other observers) that can be used to inform reflection on, and modification of, their teaching.
We aimed to explore the use of the programme and the acceptability and utility of the feedback provided.
A web-based cross-informant programme, presenting items in randomised order, was designed to record behaviour observed during a single teaching episode. Participation in the evaluation was recorded in trainees’ e-portfolios. Once data entry was completed for each episode, or after a period of two weeks, trainers were able to request feedback on their teaching performance. Trainers were invited to participate in a brief interview investigating the use of the checklist and the usefulness of the feedback for reflecting upon and modifying their teaching. Interviews were analysed using thematic analysis.
The web-based programme will be presented at the conference. Initial findings from the qualitative evaluation of the checklist were that it reminded trainers about alternative teaching strategies. New themes will be presented.
Discussion
We have developed a web-based evaluation instrument that provides fine-grained feedback, identifying the strengths and weaknesses of clinical teachers from the perspective of all those present, and allows trainers to compare their performance with that of their peers. Trainers have found the feedback useful for reflecting upon and modifying their teaching.
References
1. Epstein J. Reduced working hours and surgical training in the UK. Surgery 2004;22:i-ii.
2. Griffith CH, Wilson JF, Haist SA, Ramsbottom-Lucier M. Relationships of how well attending physicians teach to their students’ performance and residency choices. Acad Med 1997;72(10 Suppl):S118-S120.
3. Cottrell D, Kilminster S, Jolly B, Grant J. What is effective supervision and how does it happen? A critical incident study. Med Educ 2002;36:1042-1049.
4. Murray HG. Low-inference classroom teaching behaviours and student ratings of college teaching effectiveness. J Educ Psychol 1983;75:138-149.
Association for the Study of Medical Education