12. Step Nine: Evaluate and Innovate

12.4. How to Evaluate Factors Contributing to or Inhibiting Learning

There is a wide range of resources you can draw on to do this, many more in fact than for evaluating traditional face-to-face courses, because online learning leaves a traceable digital trail of evidence:

  • Student grades
  • Individual student participation rates in online activities, such as self-assessment questions, discussion forums and podcasts
  • Qualitative analysis of the discussion forums, for instance the quality and range of comments, indicating the level or depth of engagement or thinking
  • Student e-portfolios, assignments and exam answers
  • Student questionnaires
  • Focus groups

However, before starting, it is useful to draw up a list of questions as in the previous section, and then look at which sources are most likely to provide answers to those questions.

Analysis of a sample of exam answers will often provide information about course structure and the presentation of materials.

At the end of a course, I tend to look at the student grades and identify which students did well and which struggled. This depends of course on the number of students in the class; in a large class I might sample by grade. I then go back to the beginning of the course and track those students' online participation as far as possible (learning analytics make this much easier, although it can also be done manually if a learning management system is used). I find that some factors are student specific (e.g. a gregarious student who communicates with everyone) and some are course specific, for example related to the learning goals or the way I have explained or presented content. This qualitative approach will often suggest changes to the content, or to the way I interact with students, for the next version of the course. I may, for instance, decide that next time I will manage more carefully students who ‘hog’ the conversation.
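
If your learning management system can export a gradebook and an activity log, this kind of tracking can be partly automated. The sketch below (in Python, using pandas) shows one possible way to compare average participation across grade bands; the file names, column names and grade thresholds are hypothetical and would need to match whatever your LMS actually exports.

```python
# Minimal sketch: compare online participation across grade bands.
# Assumes two hypothetical CSV exports from the LMS:
#   grades.csv   -> columns: student_id, final_grade (0-100)
#   activity.csv -> columns: student_id, week, forum_posts, quizzes_attempted
import pandas as pd

grades = pd.read_csv("grades.csv")
activity = pd.read_csv("activity.csv")

# Label each student with a grade band (thresholds are illustrative only).
def grade_band(score):
    if score >= 80:
        return "A"
    if score >= 50:
        return "passed"
    return "struggled or failed"

grades["band"] = grades["final_grade"].apply(grade_band)

# Total participation per student, summed over the weeks of the course.
totals = activity.groupby("student_id", as_index=False)[
    ["forum_posts", "quizzes_attempted"]
].sum()

# Join participation totals onto the grades; students with no logged
# activity at all are kept and shown as zeros.
merged = grades.merge(totals, on="student_id", how="left").fillna(0)

# Average participation in each band: a starting point for asking why
# some students engaged less, not a verdict on the course itself.
print(merged.groupby("band")[["forum_posts", "quizzes_attempted"]].mean().round(1))
```

The output is only a prompt for further questions, for instance whether low participation in a particular band reflects the course design or the individual students concerned.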

Many institutions have a ‘standard’ student reporting system at the end of each course. These are often useless for evaluating courses with an online component, for two reasons. First, the questions asked need to be adapted to the mode of delivery, but because such questionnaires are used for cross-course comparisons, the people who manage them are often reluctant to allow a different version for online teaching. Second, because these questionnaires are usually completed voluntarily by students after the course has ended, completion rates are often notoriously low (less than 20 per cent). Results from such low response rates are worthless or at best highly misleading: students who have dropped out of the course won’t even receive the questionnaire in most cases, and the responses tend to be heavily biased towards successful students, when it is the students who struggled or dropped out that you most need to hear from.
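
To see why results from such low, self-selected response rates mislead, consider a rough illustration with entirely invented numbers: if high-achieving students return the questionnaire at a much higher rate than students who struggled or dropped out, the questionnaire average will sit well above the true class average.

```python
# Hypothetical illustration of non-response bias in end-of-course questionnaires.
# All numbers are invented for the sake of the arithmetic, not taken from a real course.
groups = {
    # group label: (number of students, response rate, mean satisfaction out of 5)
    "did well":    (40, 0.40, 4.5),
    "struggled":   (40, 0.10, 2.5),
    "dropped out": (20, 0.02, 1.5),
}

total_students = sum(n for n, _, _ in groups.values())
responded = sum(n * rate for n, rate, _ in groups.values())

# Average reported by the questionnaire (weighted by who actually responds)
# versus the average you would get if every student answered.
survey_mean = sum(n * rate * sat for n, rate, sat in groups.values()) / responded
true_mean = sum(n * sat for n, _, sat in groups.values()) / total_students

print(f"overall response rate:  {responded / total_students:.0%}")
print(f"questionnaire average:  {survey_mean:.1f} / 5")
print(f"whole-class average:    {true_mean:.1f} / 5")
```

With these invented figures the overall response rate is about 20 per cent, the questionnaire reports roughly 4.0 out of 5, yet the whole class averages only about 3.1, which is exactly the kind of gap the paragraph above warns about.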

I find small focus groups work better than student questionnaires, and for these I prefer either face-to-face meetings or synchronous tools such as Blackboard Collaborate. I deliberately approach seven or eight specific students covering the full range of achievement, from drop-out to A, and conduct a one-hour discussion around specific questions about the course. If a selected student does not want to participate, I try to find another in the same category. If you can find the time, two or three such focus groups will provide more reliable feedback than just one.