When information about the result of a transformation or an action is sent back to the inputs of the system, the process is called a ‘feedback loop’. In relation to formative and summative evaluation systems, the diagram takes the following form.
The metaphor should not be overstrained—in classical systems theory, positive feedback causes instability in the system!
The major point is that the results of evaluations of teaching should be made available to those people and groups who have been a source of evidence, if only for reasons of common courtesy. For example, there is usually no formal mechanism for notifying Departmental Reviewers or peer referees of the outcomes of their reports, which are difficult and time-consuming to write. The principle applies whether the evaluation has been formative or summative, and whether it has been conducted at the departmental or individual level.
But there are reasons beyond courtesy for ensuring the evaluation feedback loop is closed.
Fears are frequently expressed that students may be over-exposed to the evaluation process and may refuse to cooperate if they are asked to complete too many questionnaires in a short period of time. This fear may be valid in the period prior to a departmental review, when all courses are being evaluated, particularly if the students get no feedback about what happened to the survey results.
The authors administered one of Australia’s largest teaching survey systems for a number of years. In 1991, there was a coordinated attack on the system by a department about to be evaluated, and one of the arguments used was that students were ‘over-surveyed’. We decided to ask all students being surveyed that year what they thought. We asked them three very simple questions and gave them an opportunity to add their own comments.
To our considerable (and pleased) surprise, more than 95% of the students answered the first two questions in the affirmative, and more than 92% the third. The reasons for the slightly less favourable response to the last question were that students were not told the results of the survey and that they could see little improvement in teacher performance.
The major reason why students were not told of the results was that it took us too long at that time to process the questionnaires (which had been administered at the end of the academic year). On the other hand, it was perhaps rather optimistic of the students to expect instant improvement.
Thus, to avoid ‘survey burnout’ and to ensure continuing student support, it is necessary to tell the students, first, what the results were, at least in broad terms, and second, what action will be taken in response to those results.
We do not recommend the practice of ‘naming and shaming’ (that is, posting the questionnaire results on the departmental notice board, a common practice in American universities), if only because perverse academics with low scores have been known to actually take pride in them and/or see them as proof of student incompetence or malevolence.
At the individual teacher or course level, probably the best practice is for the teacher to discuss the broad thrust of the results with the class, or sub-groups thereof, and to set out possible actions for improvement. It should be remembered that not all student expectations can be met, due to resource limitations, departmentally or professionally set curricula, and so on. When students understand the reasons why not all improvement can be instantaneous, they remain cooperative. On the other hand, where they have made legitimate criticisms and improvements can be made in the following year, then such improvements should be put into effect.
Such discussions also have the benefit that the dialogue can continue, with students amplifying the reasons for their ratings and comments. This benefit is less likely where emails or web sites are used to convey the information to them. In general, however, we do not believe that detailed results of questionnaires should be given to students in writing because of the danger that they may be misused.
The situation becomes somewhat more complex when the evaluation is conducted at the departmental level. Even assuming that teachers have discussed the results of individual course questionnaires with their students, the processes of a departmental review can be drawn out over a considerable period, culminating in resolutions being passed by the institution’s academic council. While departments and academic councils do normally have student representatives, such representatives are notoriously lax in reporting to their constituents. Part of the answer lies in training and support for student representatives at that level, but there also needs to be in place an efficient and effective system of class representatives as an essential communications link. Departmental cooperation with the Students’ Union is essential.
Where teachers conduct a purely formative evaluation, a feedback loop is irrelevant because the whole process, by definition, consists of feedback from one or more sources. Universities should, however, provide a service whereby staff can seek advice about the interpretation of feedback (particularly questionnaires) and about the implementation of improvements. Such a service, whether provided by a Quality Office or by an academic development unit, must be completely confidential.
Feedback loops for teachers do become important with summative evaluations of teaching or courses conducted by third parties. It is most unwise to institute summative evaluations until adequate support mechanisms are in place. There is an argument that teacher support should be based firmly in the Department, either through a mentor system or by the head of department. Such support is highly desirable but it does not replace the role of an impartial and confidential central academic development service.
One of the reasons is that an experienced academic developer can probe behind the data in a way that is difficult for departmental colleagues. For example, students might criticise an aspect of an individual’s teaching which is a symptom of a problem at departmental level. To take a very simple instance, a lecturer may be perceived as talking too fast where the real problem is that the course syllabus, determined by the department, is overcrowded. The remedy is not to get the lecturer to slow down but to persuade the department to examine its curriculum.
Again, sensitive discussions with an academic developer can lead to ongoing professional development for the teacher concerned in a neutral and confidential environment, which is difficult, if not impossible, to achieve in the departmental setting.
Heads of department are often tempted to send staff with a ‘poor’ teaching evaluation to the academic development unit, an action which causes embarrassment to both parties. A rather more productive atmosphere is created where the unit has established its reputation for effectiveness and all its clients are volunteers.