### 5. Strengths and Weaknesses of MOOCs

#### 5.5. Assessment

Assessment of the massive numbers of participants in MOOCs has proved to be a major challenge. It is a complex topic that can be dealt with only briefly here. This section draws heavily on Suen’s paper.

##### Computer-Marked Assessments

Assessment to date in MOOCs has been primarily of two kinds. The first is based on quantitative multiple-choice tests, or on response boxes where formulae or ‘correct code’ can be entered and automatically checked. Usually, participants are given immediate automated feedback on their answers, ranging from a simple right or wrong to more detailed responses depending on the type of answer being checked; in all cases the process is fully automated.
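As a minimal sketch of how such automated checking with immediate feedback might work, the following illustrates marking a multiple-choice answer and returning feedback that depends on which distractor was chosen. The question format and feedback messages are illustrative assumptions, not any MOOC platform’s actual data model.

```python
# Minimal sketch of automated marking with immediate feedback.
# The question format and feedback text are illustrative assumptions,
# not any real MOOC platform's data model.

def mark_multiple_choice(question, answer):
    """Return (correct, feedback) for a multiple-choice response."""
    if answer == question["correct_option"]:
        return True, "Correct."
    # Richer feedback can depend on which distractor was chosen.
    return False, question["feedback"].get(answer, "Incorrect.")

quiz = {
    "prompt": "Which gas makes up most of Earth's atmosphere?",
    "correct_option": "nitrogen",
    "feedback": {
        "oxygen": "Oxygen is only about 21% of the atmosphere.",
        "carbon dioxide": "CO2 is less than 0.1% of the atmosphere.",
    },
}

print(mark_multiple_choice(quiz, "nitrogen"))
print(mark_multiple_choice(quiz, "oxygen"))
```

The point is that the marking logic only works where a clear, correct answer can be specified in advance, which is exactly the limitation the next paragraph describes.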

For straight testing of facts, principles, formulae, equations and other forms of conceptual learning where there are clear, correct answers, this works well. In fact, multiple-choice computer-marked assignments were used by the UK Open University as long ago as the 1970s, although the means to give immediate online feedback were not available then. However, this method of assessment is limited for testing deep or ‘transformative’ learning, and particularly weak for assessing the intellectual skills needed in a digital age, such as creative or original thinking.

##### Peer Assessment

Another type of assessment that has been tried in MOOCs is peer assessment, where participants assess each other’s work. Peer assessment is not new. It has been successfully used for formative assessment in traditional classrooms and in some online teaching for credit (Falchikov and Goldfinch, 2000; van Zundert et al., 2010). More importantly, peer assessment is seen as a powerful way to deepen understanding and knowledge through the process of students evaluating the work of others. At the same time, for the participants doing the assessing, it can help develop some of the skills needed in a digital age, such as critical thinking.

However, a key feature of the successful use of peer assessment has been the close involvement of an instructor or teacher, both in providing benchmarks, rubrics or criteria for assessment, and in monitoring and adjusting peer assessments to ensure consistency and a match with the benchmarks set. Although an instructor can provide the benchmarks and rubrics in MOOCs, close monitoring of the multiple peer assessments is difficult if not impossible with very large numbers of participants. As a result, MOOC participants often become incensed at being randomly assessed by other participants who may not, and often do not, have the knowledge or ability to give a ‘fair’ or accurate assessment of another participant’s work.

Various attempts have been made to get round the limitations of peer assessment in MOOCs, such as calibrated peer review, based on averaging all the peer ratings, and Bayesian post hoc stabilization (Piech et al., 2013). Although these statistical techniques somewhat reduce the error (or spread) of peer review, they still do not remove systematic errors of judgement due to raters’ misconceptions. This is particularly a problem where a majority of participants fail to understand key concepts in a MOOC, in which case peer assessment becomes the blind leading the blind.
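The distinction between random spread and systematic error can be made concrete with a toy simulation (this is an illustrative sketch, not Piech et al.’s actual model): averaging many peer ratings shrinks the random scatter, but a shared misconception that biases every rater survives the averaging untouched.

```python
# Illustrative sketch: averaging peer ratings reduces random scatter,
# but a systematic bias shared by all raters (a common misconception)
# is not removed by averaging. Numbers are invented for illustration.
import random

random.seed(1)
true_score = 80       # the score an expert marker would give
bias = -15            # systematic error: most raters misunderstand a key concept
ratings = [true_score + bias + random.gauss(0, 10) for _ in range(50)]

avg = sum(ratings) / len(ratings)
# The average converges towards true_score + bias, not towards true_score.
print(round(avg, 1))
```

Averaging fifty ratings cuts the random spread by a factor of about seven (√50), yet the result still sits near 65 rather than 80: exactly the “blind leading the blind” problem described above.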

##### Automated Essay Scoring

This is another area where there have been attempts to automate scoring (Balfour, 2013). Although such methods are increasingly sophisticated, they can currently assess accurately only technical writing skills, such as grammar, spelling and sentence construction. Once again, they do not accurately measure longer essays where higher-level intellectual skills are demanded.
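A toy sketch of the kind of surface features such scorers lean on (word count, average sentence length, vocabulary variety; the feature names here are invented for illustration) makes the limitation concrete: none of these features measures the quality of an argument, only the mechanics of the writing.

```python
# Toy sketch of surface features of the kind automated essay scorers
# rely on. The feature set is invented for illustration; real systems
# are more sophisticated but share the same basic limitation: these
# features describe the mechanics of writing, not the ideas.
import re

def surface_features(essay):
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

print(surface_features("MOOCs scale teaching. Assessing deep learning at scale is hard."))
```

A fluent but vacuous essay and an insightful but plainly written one can produce very similar feature values, which is why higher-level intellectual skills remain out of reach.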

Particularly in xMOOCs, participants may be awarded a certificate or a ‘badge’ for successful completion of the MOOC, based on a final test (usually computer-marked) which measures the level of learning in the course. However, most of the institutions offering MOOCs will not accept their own certificates for admission or credit within their own campus-based programs. Probably nothing says more about their confidence in the quality of the assessment than this failure of MOOC providers to recognize their own teaching.

MOOC-based microcredentials are a more recent development. A microcredential is any one of a number of new certifications that cover more than a single course but are less than a full degree. Pickard (2018) provides an analysis of more than 450 MOOC-based microcredentials. Pickard states:

Microcredentials can be seen as part of a trend toward modularity and stackability in higher education, the idea being that each little piece of education can be consumed on its own or can be aggregated with other pieces up to something larger. Each course is made of units, each unit is made of lessons; courses can stack up to Specializations or XSeries; these can stack up to partial degrees such as MicroMasters, or all the way up to full degrees (though only some microcredentials are structured as pieces of degrees).

However, in her analysis of the microcredentials offered through the main MOOC platforms, such as Coursera, edX, Udacity and FutureLearn, Pickard found that:

• Student fees range from US$250 to US$17,000.
• Some microcredentials, though not all, offer some opportunity to earn credit towards a degree program. Typically, university credit is awarded if and only if a student goes on to enroll in the particular degree program connected with the microcredential.
• They are not accredited, recognized, or evaluated by third party organizations (except insofar as they pertain to university degree programs). This variability and lack of standardization poses a problem for both learners and employers, as it makes it difficult to compare the various microcredentials.
• With so much variability, how would a prospective learner choose among the various options? Furthermore, without a detailed understanding of these options, how would an employer interpret or compare these microcredentials when they come upon a resume?

Nevertheless, in a digital age, both workers and employers will increasingly look for ways to ‘accredit’ units of learning smaller than a degree, but in ways that can eventually be stacked towards a full degree. The issue is whether tying this to the MOOC movement is the best way to go.

Surely a better way would be to develop microcredentials as part of, or in parallel with, a regular online masters program. For instance, as early as 2003, the University of British Columbia, in its online Master of Educational Technology, was allowing students to take courses one at a time, or the five foundation courses for a post-graduate certificate, or to add four more courses and a project to the certificate for a full Master’s degree. Such microcredentials would not be MOOCs, unless (a) they are open to anyone and (b) they are free or at such a low cost that anyone can take them. Then the issue becomes whether the institution will accept such MOOC-like credentials as part of a full degree. If not, employers are unlikely to recognize such microcredentials, because they will not know what they are worth.

##### The Intent Behind Assessment

To evaluate assessment in MOOCs requires an examination of the intent behind it, and there are many different purposes for assessment. Peer assessment and immediate feedback on computer-marked tests can be extremely valuable for formative assessment, enabling participants to see what they have understood and helping them develop further their understanding of key concepts. In cMOOCs, as Suen points out, learning is measured through the communication that takes place between MOOC participants, resulting in a crowdsourced validation of knowledge: what the participants collectively come to believe to be true as a result of participating in the MOOC, so formal assessment is unnecessary. However, what is learned in this way is not necessarily academically validated knowledge, which, to be fair, is not the concern of cMOOC proponents.

Academic assessment is a form of currency, related not only to measuring student achievement but also affecting student mobility (for example, entrance to graduate school) and perhaps more importantly employment opportunities and promotion. From a learner’s perspective, the validity of the currency – the recognition and transferability of the qualification – is essential. To date, MOOCs have been unable to demonstrate that they are able to assess accurately the learning achievements of participants beyond comprehension and knowledge of ideas, principles and processes (recognizing that there is some value in this alone). What MOOCs have not been able to demonstrate is that they can either develop or assess deep understanding or the intellectual skills required in a digital age. Indeed, this may not be possible within the constraints of massiveness, which is their major distinguishing feature from other forms of online learning.