A punitive bureaucratic tool or a valuable resource? Using student evaluations to enhance your teaching

Sarah Moore and Nyiel Kuol
University of Limerick
E-mail: sarah.moore@ul.ie

Abstract

This chapter explores specific ways in which academic faculty can participate in, use and interpret student evaluations of their teaching. It begins with a critical review of the literature on student evaluations of teaching (SETs), and it uses evidence-based data to demonstrate that there are different categories of typical reactions to both positive and negative student evaluations, some (but not all) of which have a helpful effect on subsequent teaching activity and orientations. It examines Johnson's (2000) important censure of SETs, exploring to what extent and under what circumstances such systems play into the hands of the bureaucratic, mechanistic climate for which higher educational contexts have been increasingly criticised. It also draws on Perry's (1988) observation that different worlds can exist within the same classroom setting, and shows how SETs can be used to explore and understand these different worlds in more meaningful ways.

The following discussion explores some of the natural defence mechanisms that operate when we review our students' evaluations of our teaching. Based on extensive experience with the design, development and implementation of a student evaluation of teaching system, it highlights optimum criteria for using SETs in order to produce positive teaching and learning outcomes. The chapter concludes with a range of practical strategies that academics can adopt in order to use SETs as a valuable professional development resource.

Introduction

In the educational literature, thousands of research papers have focused on the value and nature of student evaluations of teaching (e.g. Cashin 1988; Feldman 1990). In the light of this, it is extraordinary that very few of these studies have examined the nature of teachers' reactions to such feedback, least of all the impact of those reactions on subsequent efforts to improve the teaching and learning environment in higher educational contexts. With all of the controversy surrounding the application of SET systems in university settings, it seems that the dialogue has been excessively focused on the nature and validity of student feedback without looking at the equally important impact that faculty reaction to that feedback has on subsequent teaching and learning contexts (e.g. Marsh 2000).

This chapter presents a theoretical framework of faculty reaction to student evaluations of teaching. It argues that understanding the range of possible reactions to which SETs may give rise can equip institutions and individuals with important perspectives, allowing them to use SET-based feedback more effectively than might otherwise be the case. The following discussion presents a brief overview of the SET controversy, outlines a feedback reaction matrix proposed by Moore and Kuol (2005) and sets out a range of pragmatic, research-based recommendations associated with the effective, appropriate and culturally sensitive use of SET systems in university settings. It aims to ensure that institutions and faculty that avail of SET systems within their own work contexts do so with a view to improving the teaching component of their professional lives, notwithstanding the fact that such systems rarely if ever give rise to perfect (or indeed easily interpreted) data.

The SET controversy in academic settings

Whether SETs give rise to any enhancement of teaching and learning within higher educational settings is a highly contested question. In the context of an environment in which independent action and academic freedom are fiercely protected, the perceived value and validity of student evaluations of teaching (SETs) are mixed at best, and have been the focus of much divisive debate over the last three decades.

Notwithstanding the criticisms and debates surrounding SETs, the developing focus on quality, accountability and the importance of 'reflective practice' in university teaching has driven the increased use of student surveys to evaluate or provide feedback on teacher performance. SETs are an established part of university feedback systems in the USA, UK, Australia and many European countries. In the Irish context, implementation of this form of feedback has been slower than elsewhere. Most Irish universities either do not have formal, centralised systems for student evaluation of teaching or have only recently introduced them. However, the pressure to establish and mainstream such systems is increasing. This pressure comes from emerging legislative, policy and quality-oriented perspectives, and also from individual faculty members and groups of faculty, many of whom have requested or are seeking more structured and objective feedback from their students. This is consistent with the observations that Ashford and Cummings (1983) and others have made about the tendency, in many organisational settings, for people to seek out performance information from sources other than their immediate superiors.

Given that other countries have been engaged in efforts to moderate or improve SET systems, the analysis of their introduction in a setting in which they have not previously been part of established practice may yield important and fresh insights that could give rise to better systems in a whole range of environments.

The case against student evaluations of teaching

Some commentators argue that students are not an appropriate or effective source of teacher evaluation. Cashin (1988) proposed that such factors as student motivation and expected grades could bias student evaluations. Tomasco (1980) has argued that student evaluations of teaching are more likely to be 'personality contests' than valid measures of teaching effectiveness. Others have outlined that student evaluations of teaching can lead to 'grade inflation' and a lowering of standards. Calderon et al. (1996) and Green, Calderon, and Reider (1998) highlight that some SETs require students to respond to performance issues that are beyond or outside their own knowledge and experience bases. For example, asking students to rate their teachers' level of knowledge will yield only impressions of expertise that may be inaccurate and are likely to be moderated by stereotypical associations often found to be linked to demographic features such as age, gender and physical appearance.

Some of the criticisms directed at SETs as a source of performance information are based, then, on the idea that students are simply not in a position to evaluate their teachers' performance. In addition, student perspectives and motivations may lead them to evaluate lecturers on the basis of their own sense of comfort and satisfaction, thus implicitly encouraging the teaching of less challenging material and the avoidance by teachers of processes that may give rise to high-level learning (Murphy 1999). Carey (1993) has presented evidence that points to the risks that SETs pose in terms of catalysing an increase in standard grading (grade inflation) along with a decrease in course demands (competence deflation). Some commentators have suggested that the costs of introducing SET systems that are effective and efficient may outweigh the benefits to which they are said to give rise, while others suggest that they are justifiable precisely because they provide low-cost alternatives to other forms of evaluation and feedback (Greenwald and Gilmore 1997). But possibly the most serious attack on the use of SETs in higher educational environments comes from commentators such as Johnson (2000) and Wilson, Lizzo, and Ramsden (1997), who highlight that the motives for installing SET systems in educational contexts are neither educationally sound nor focused on the fulfilment of the goals of either teachers or students. Rather, as Johnson argues, they exist primarily to serve the needs of the bureaucracy, in which the systematic reporting of feedback can be conducted on an organisation-wide basis in order to fulfil relatively shallow notions of what teaching quality represents.

Where SETs have been introduced, they are often rubbished as invalid or damaging, or at best accepted as a necessary evil (Ory 1991). Where they have not been introduced, persistent efforts to avoid their introduction are often made (e.g. Whitworth, Price, and Randall 2002). What is clear is that student evaluations of teaching that have any influence on the subsequent rewards received by individual teachers represent a new source of authority, one that has changed the balance of power within academic institutions. This may indeed be the reason why so many arguments against their introduction have reached both public and scholarly arenas.

The case for student evaluations of teaching

Despite the criticisms and concerns surrounding the implementation of SETs, student evaluations have also been welcomed and endorsed by a range of commentators. There is plenty of evidence to suggest that students can provide useful information about the effectiveness of teaching methods, equity in the evaluation/teaching process, faculty focus on the student, and faculty enthusiasm and interest in the content of the course or subject (e.g. Stockham and Amann 1994). Much of the debate under-emphasises the important developmental opportunities that student feedback can provide (Hand and Rowe 2001). Furthermore, SETs can counteract the proliferation of unrepresentative information and feedback about teaching that relies on hearsay and anecdote. Without a student evaluation of teaching system, feedback from informal, serendipitous sources is likely to be based on individual students' unequal abilities or opportunities to bring teaching-related issues to the attention of the system (see Murphy 1999). Student evaluation systems that avail of responses from a representative sample of students in a specific class setting can help to identify the 'size' of teaching-related problems or issues. And, particularly in large or diverse classroom settings, SETs that include key demographic information can identify subsets of students who may be encountering certain difficulties, as the sketch below illustrates.
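
To make this concrete, the short Python sketch below shows one way that responses carrying demographic fields might be summarised to surface subgroups reporting lower ratings. It is purely illustrative: the column names, rating scale and data are hypothetical and are not drawn from any instrument described in this chapter.

```python
import pandas as pd

# Hypothetical SET responses: one row per student, with a 1-5 rating on a
# single item plus two demographic fields gathered alongside the evaluation.
responses = pd.DataFrame({
    "year_of_study":  [1, 1, 2, 2, 3, 3, 1, 2],
    "mode_of_study":  ["full-time", "part-time", "full-time", "part-time",
                       "full-time", "part-time", "full-time", "full-time"],
    "clarity_rating": [5, 2, 4, 2, 5, 1, 4, 5],
})

# Mean rating and group size for each demographic subgroup: a low mean within
# a reasonably sized subgroup flags students who may be encountering
# difficulties that the class-wide average would conceal.
summary = (responses
           .groupby(["year_of_study", "mode_of_study"])["clarity_rating"]
           .agg(["mean", "count"])
           .sort_values("mean"))
print(summary)
```

In this toy example the part-time students' consistently lower ratings stand out, even though the overall class mean of 3.5 looks acceptable.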

Added to these factors are the more general benefits that having and using a teaching-related measurement instrument can bestow on the teaching dimension of an academic's professional role. Given that it is an almost universal phenomenon that research activity reaps more individual rewards than teaching, efforts to measure the teaching-related dimensions of academics' performance, and to pay attention to those measures in the context of an individual's professional development, help to create more parity of esteem between the teaching and research components of the academic role. Such a measurement system can, by virtue of its existence, give rise to significant improvements in the undergraduate experience, something that has been the subject of explicit concern at both institutional and policy levels for over two decades (e.g. Radmacher and Martin 2001). Brookfield (1995) has helped to focus the debate by implying that good feedback systems should be formative rather than summative, should recognise that a 'perfect score' does not always reflect teaching quality or learning impact, and should be implemented in a context of trust and development rather than fear or censure. Many of these features could be more effectively introduced if we understood more about the nature and impact of faculty reaction to student feedback.

The feedback reaction matrix

Previous research by these authors (see Moore and Kuol 2005) has provided a tentative framework for understanding the variety of orientations that evaluated faculty may adopt with respect to the feedback they receive from their students. Understanding different categories of feedback reaction can provide a useful picture of the likely impact of feedback on a group of participants in a SET system.

The proposed reaction matrix identifies the extent to which there exists a match or a mismatch between a faculty member's own subjective evaluation of their teaching and the student feedback provided via a SET system. A positive subjective evaluation that is matched with broadly positive feedback from students can be hypothesised to lead to reactions characterised by endorsement and reinforcement, with a possible risk of complacency in terms of future performance.

A subjectively negative emphasis combined with broadly positive objective feedback may indicate that individuals in this quadrant are committed to addressing specific aspects of underperformance. This orientation may also be accompanied by the risk that individuals will become ‘fixated’ on relatively unimportant problems, at the expense of otherwise good performance.

A match between a faculty member's negative self-evaluation and broadly negative student feedback can be hypothesised to lead to reactions characterised by a realistic commitment to improvement, but these reactions also risk being accompanied by dismay, dejection and withdrawal from a commitment to developing teaching effectiveness.

Finally, broadly negative student evaluations accompanied by a positive subjective focus on one's own teaching may provide important indicators about the different value positions adopted by teachers and students within the same classroom setting, invoking Perry's (1988) description of different worlds at play in the same learning setting. Another possible explanation is that negative feedback accompanied by a positive self-evaluation indicates a form of denial; this may be the kind of reaction that is most difficult to address. Alternatively, this reaction may represent a functional strategy that serves to protect an individual's self-esteem in the face of student dissatisfaction over which the individual teacher perceives that he/she has little or no control.


Table 1: Theoretical orientations towards feedback based on the interactions between subjective and objective evaluative emphasis

Quadrant 1 (positive self-evaluation, positive SETs):
  Endorsement of performance; reinforcement of current practice.
  Risk: complacency and a shift of focus to other areas of professional development.

Quadrant 2 (positive self-evaluation, negative SETs):
  Ego-protection; maintenance of sense of efficacy; identification of a difference of value position between teacher and students.
  Risk: intransigent denial of real problems.

Quadrant 3 (negative self-evaluation, positive SETs):
  Commitment to addressing minor problem areas.
  Risk: excessive fixation on small teaching problems at the expense of other areas of established competence.

Quadrant 4 (negative self-evaluation, negative SETs):
  Realistic analysis of, and commitment to, improvement and/or repair strategies.
  Risk: dismay, dejection, discouragement and possible withdrawal.
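
As a minimal sketch of the matrix logic only (the function and type below are hypothetical, with labels paraphrased from Table 1), the mapping from the two evaluative inputs to a likely orientation and its attendant risk might be expressed as follows.

```python
from dataclasses import dataclass

@dataclass
class Orientation:
    quadrant: int
    likely_reaction: str
    risk: str

def classify_reaction(self_eval_positive: bool, sets_positive: bool) -> Orientation:
    """Map a teacher's self-evaluation and aggregated SET feedback to a quadrant."""
    if self_eval_positive and sets_positive:
        return Orientation(1, "Endorsement and reinforcement of current practice",
                           "Complacency; attention drifts to other areas of development")
    if self_eval_positive and not sets_positive:
        return Orientation(2, "Ego-protection; differing value positions identified",
                           "Intransigent denial of real problems")
    if not self_eval_positive and sets_positive:
        return Orientation(3, "Commitment to addressing minor problem areas",
                           "Excessive fixation on small teaching problems")
    return Orientation(4, "Realistic analysis and commitment to improvement or repair",
                       "Dismay, dejection, discouragement and possible withdrawal")

# Example: a teacher who rates their own teaching positively but receives
# broadly negative student feedback falls into Quadrant 2.
print(classify_reaction(self_eval_positive=True, sets_positive=False))
```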

Thus, according to this empirically derived framework, student feedback of any kind can give rise to both positive and negative responses from faculty. These reactions may be contingent on the extent to which teachers’ own self-evaluations match those of their students. Reactions of endorsement, ego-protection, problem solving and repair can all contribute to a more positive learning environment, but the risks that faculty will respond with complacency, denial, fixation or dismay are possibilities that haunt every academic setting, and ones that threaten to have a detrimental impact on a wide variety of teaching and learning experiences.

An institutional and individual awareness of the possible range of reactions to student feedback can empower those in educational settings to use SETs in more sensitive, appropriate and effective ways. Based on an analysis of the qualitative responses of SET participants and on a review of the current literature exploring the validity of SETs, the second part of this chapter highlights the features of good SET systems. It proposes a range of individual guidelines that can help faculty members to manage their reactions in a way that will be more likely to give rise to genuine professional development.

Individual strategies for analysing student feedback:

  1. Control your defence mechanisms. Ask yourself: What kinds of reactions am I having to this feedback and what is it likely to make me do in future? Make explicit the implicit emotions to which the feedback is giving rise.
  2. Analyse the source of your students’ reactions in a way that sheds light on any issues and problems that have been identified. Ask yourself: What are the reasons behind both the positive and negative feedback provided by the students? Whether or not you can answer these questions easily, try to pursue information via other methodologies (e.g. focus groups; one-to-one interviews, facilitated by objective information gatherers). Remember to focus just as assiduously on the reasons behind positive as well as negative feedback, keeping in mind that it can be just as professionally damaging not to know why students think you have done well, as it is not to know why they think you have done badly.
  3. Work hard not to under-react or over-react to information that you receive via SET feedback. Ask yourself: What are the changes that would enhance student learning, versus the ones that would have neutral or negative impact on learning? Try to differentiate between the implications of different changes implied by the feedback.
  4. Divide the issues raised by students into actionable and non-actionable categories. Ask yourself: What aspects of this feedback can I do something about? What aspects of this feedback require a wider institutional, administrative or resource-based response? Integrate these categories into your teaching enhancement strategy. Simply put, it's important that you don't try to justify anything your students have identified that is genuinely unjustifiable about your current teaching approaches, but equally that you don't allow yourself to become the scapegoat for issues that clearly need to be tackled at an institutional level.
  5. Communicate with students before and after their provision of feedback. Ask yourself: How can I use the SET system to improve communication and to create constructive dialogue with my students? Do not appear to ignore students' participation in the SET system. Register with them that you are aware of their impending participation in the feedback system and encourage them to take part as honestly and constructively as possible. And when the results come in, devote a short session of one of your lectures to presenting the summary data and explaining to your students what you will and will not be doing as a result of the feedback they have provided. Student satisfaction levels can be significantly increased via this kind of non-defensive, honest and reasonable communication. Ensure that they know that no negative or recriminatory outcomes will be associated with their participation.
  6. Do not make the simplistic assumption that all positive responses are related to good teaching and all negative responses to bad teaching. Ask yourself: What parts of this feedback most robustly indicate where my teaching strengths and weaknesses lie? As outlined earlier in this chapter, much of the literature on SETs cautions against the risk of producing negative learning outcomes in the pursuit of positive ratings. Some negative student reactions to your teaching may relate to a vital part of their learning journey, and this negative feedback can provide the basis for an enhanced dialogue to help secure higher levels of student motivation and commitment. Equally, be wary of assuming that positive ratings are always related to good teaching: as outlined earlier, the literature shows that there are moderators of student satisfaction that relate to other factors such as disciplinary background, class size, student demographics and timing of feedback.
  7. Remember that small changes can have big effects. Ask yourself: What initial small changes can I make, based on the feedback that I have received, that might have immediate and positive effects on my students' learning experiences in this learning setting? While not all changes implied by the feedback will be easy or short term, it's a good idea to identify some 'low-hanging fruit'. Most participants in a SET system can identify one or two small changes that are relatively easy to effect and that can indicate to students that you have heard their voices and are registering their feedback through immediate action. This can create positive momentum for more fundamental or strategic changes to your teaching styles and approaches.
  8. Develop a teaching enhancement strategy that takes into account the SET feedback. Ask yourself: What are my long-term teaching goals and how can this feedback help me to achieve them? Within a short time of receiving the feedback, allocate a dedicated period of time in your schedule to develop a longer-term teaching enhancement strategy. This strategy might include plans to receive more feedback later in the semester or year, specific professional development interventions that you'd like to avail of, more communication with other key members of your teaching network (heads of department, IT specialists, researchers in your field, librarians, student advisers, study skills experts and so on), and enhanced student assessment strategies.

Institutional issues for the design of a student evaluation system

Individual teachers can achieve enormous advances in their own teaching strategies if they resolve to engage in a functional and positive way with the feedback that they receive through student evaluation of teaching systems. However, functional, healthy and emotionally intelligent responses to SETs can be significantly facilitated or inhibited by the institutional approach to managing a SET system. Based on the analysis of the literature outlined earlier, and on the opinions gathered from participating students and teachers, we recommend that standardised SET systems should be characterised by the following important features:

SETs should be: voluntary rather than compulsory; treated as confidential developmental information rather than publicly displayed; contextualised, taking account of factors such as discipline, class size and student demographics; supported by professional development resources and follow-up; designed to capture the complexity of teaching rather than simplistic; and interpreted in combination with other sources of evidence rather than in isolation.

Conclusions

Too often, SET systems have been compulsory, publicly displayed, uncontextualised, unsupported, simplistic and interpreted in isolated ways, features which render SETs punitive bureaucratic tools rather than supportive mechanisms through which enhanced learning environments can be created and sustained. Furthermore, these characteristics are particularly inappropriate in academic environments, the very contexts in which people are encouraged to adopt critical stances towards one-dimensional or naive approaches to data gathering. In order for a SET system to become a positive, value-adding and effective mechanism, it must help teachers and learners to enhance the complex dynamics that occur in higher level educational settings. It should avoid unsophisticated, knee-jerk analysis, and it should promote trust and positive dialogue between student and teacher in a way that gives rise to a better learning culture. This chapter has provided a set of recommendations that we hope will help to prevent SETs from acting as punitive, bureaucratic instruments of control and instead ensure that they are more likely to act as a valuable resource for teachers and their students in the ongoing journey of professional development.

References

   Ashford, S. J. and L. L. Cummings (1983). Feedback as an individual resource: Personal strategies for creating information. Organisational Behaviour and Human Performance 32, 370-398.

   Brookfield, S. (1995). On becoming a critically reflective teacher. San Francisco: Jossey-Bass.

   Ory, J. C. (1991). Changes in evaluating teaching in higher education. Theory into Practice 30(1), 30-36.

   Calderon, T. G., Gabbin, A. L., and B. P. Green (1996). Report of the committee on promoting and evaluating effective teaching. Virginia, U.S.A.: James Madison University Press.

   Carey, G. (1993). Thoughts on the lesser evil: student evaluations. Perspectives on Political Science 22(1), 17-20.

   Cashin, W. E. (1988). Student ratings of teaching: a summary of the research. IDEA Paper No. 20. Manhattan: Kansas State University, Centre for Faculty Evaluation and Development.

   Green, B. P., T. G. Calderon, and B. P. Reider (1998, February). A content analysis of teaching evaluation instruments used in accounting departments. Issues in Accounting Education, 15-30.

   Greenwald, A. and G. Gilmore (1997). Grading leniency is a removable contaminant of student ratings. American Psychologist 52(11), 1209-1217.

   Hand, L. and M. Rowe (2001). Evaluation of student feedback. Accounting Education 10(2), 147-160.

   Johnson, R. (2000). The authority of the student evaluation questionnaire. Teaching in Higher Education 5(4), 419-434.

   Moore, S. and N. Kuol (2005). Students evaluating teachers: exploring the importance of faculty reaction to feedback on teaching. Teaching in Higher Education 10(1).

   Murphy, A. (1999). Enhancing the motivation for good teaching with an improved system of evaluation. Financial Practice and Education Fall/Winter, 100-104.

   Perry, W. G. (1988). Different worlds in the same classroom. In P. Ramsden (Ed.), Improving learning: new perspectives. New Jersey: Nichols.

   Radmacher, S. A. and D. J. Martin (2001). Identifying significant predictors of student evaluations of faculty through hierarchical regression analysis. The Journal of Psychology 135(3), 259-268.

   Stockham, S. L. and J. F. Amann (1994). Facilitated student feedback to improve teaching and learning. Journal of Veterinary Medical Education 21(2).

   Tomasco, A. T. (1980). Student perceptions of instructional and personality characteristics of faculty: A canonical analysis. Teaching of Psychology 7, 79-82.

   Whitworth, J. E., B. A. Price, and C. H. Randall (2002). Factors that affect business student opinion of teaching and learning. Journal of Education for Business (May/June), 282-289.

   Wilson, K. L., A. Lizzo, and P. Ramsden (1997). The development, validation and application of the Course Experience Questionnaire. Studies in Higher Education 22(1), 33-54.