4. Emerging Technologies: Artificial Intelligence
4.5. Strengths and Weaknesses
There are several questions that can be asked to assess the value of the teaching and learning affordances of particular applications of AI:
- Is the application based on the three core features of ‘modern’ AI: massive data sets, massive computing power, and powerful and relevant algorithms?
- Does the application have clear benefits in terms of affordances over other media, and particularly general computing applications?
- Does the application facilitate the development of the skills and knowledge needed in a digital age?
- Is there unintended bias built into the algorithms? Does it appear to discriminate against certain categories of people?
- Is the application ethical in terms of student and teacher/instructor privacy and their rights in an open and democratic society?
- Are the results of the application ‘explainable’? For example, can a teacher or instructor or those responsible for the application understand and explain to students how the results or decisions made by the AI application were reached?
These issues are addressed below.
Is it Really a 'Modern' AI Application in Teaching and Learning?
Looking at the Zawacki-Richter et al. study and many other research papers published in peer-reviewed journals, very few so-called AI applications in teaching and learning meet the criteria of massive data, massive computing power and powerful and relevant algorithms. Much of the intelligent tutoring within conventional education is what might be termed ‘old’ AI: there is not a lot of processing going on, and the data sets are relatively small. Many of the applications described in so-called AI papers on intelligent tutoring and adaptive learning are really just general computing applications.
Indeed, so-called intelligent tutoring systems, automated multiple-choice test marking, and automated feedback on such tests have been around since the early 1980s. The closest to modern AI appears to be the automated essay grading of standardized tests administered across an entire education system. However, there are major problems with the latter, and more development is clearly needed to make automated essay grading a valid exercise.
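To see why such systems do not require ‘modern’ AI, consider what automated multiple-choice marking with automated feedback actually involves. The Python sketch below is hypothetical (the answer key, pass threshold and messages are all invented), but it is functionally complete: a lookup, a tally and a branching rule, needing neither massive data nor massive computing power.

```python
# A minimal sketch of 1980s-style automated multiple-choice marking.
# The answer key, responses and pass threshold are invented for illustration.

answer_key = {"Q1": "b", "Q2": "d", "Q3": "a"}

def mark_test(responses: dict) -> float:
    """Return the percentage of questions answered correctly."""
    correct = sum(1 for q, key in answer_key.items() if responses.get(q) == key)
    return 100 * correct / len(answer_key)

def feedback(score: float) -> str:
    """'Automated feedback' here is just a threshold rule, not intelligence."""
    return "Pass - well done" if score >= 50 else "Review the unit and retry"

student = {"Q1": "b", "Q2": "c", "Q3": "a"}
score = mark_test(student)
print(round(score, 1), feedback(score))  # 66.7 Pass - well done
```

There is no model of the learner anywhere in this process, which is what separates it from even modest claims of ‘intelligence’.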
The main advantage that Klutka et al. (2018) identify for AI is that it opens up the possibility for higher education services to become scalable at an unprecedented rate, both inside and outside the classroom. However, it is difficult to see how ‘modern’ AI could be used within the current education system, where class sizes, and even whole academic departments, and hence the available data points, are small relative to the numbers that ‘modern’ AI requires. It cannot be said that, to date, modern AI has been tried in teaching and learning and failed; it has not really even been tried.
Applications outside the current formal system are more realistic: for MOOCs, for instance, or for corporate training on an international scale, or for distance teaching universities with very large numbers of students. The requirement for massive data does suggest that the whole education system could be massively disrupted if the necessary scale could be reached by offering modern AI-based education outside the existing education systems, for instance by large Internet corporations that could tap their massive markets of consumers.
However, there is still a long way to go before AI makes that feasible. This is not to say that there could not be such applications of modern AI in the future, but at the moment, in the words of the old English bobby, ‘Move along, now, there’s nothing to see here.’
However, for the sake of argument, let’s assume that the definition of AI offered here is too strict and that most of the applications discussed in this section are examples of AI. How do these applications of AI meet the other criteria above?
Do the Applications Facilitate the Development of the Skills and Knowledge Needed in a Digital Age?
This does not seem to be the case in most so-called AI applications for teaching and learning today. They are heavily focused on content presentation and on testing for understanding and comprehension. In particular, Zawacki-Richter et al. make the point that most AI developments for teaching and learning – or at least the research papers – come from computer scientists, not educators. Computer scientists tend to use models of learning based on how computers or computer networks work (since, of course, it will be a computer that has to operate the AI). As a result, such AI applications tend to adopt a very behaviorist model of learning: present/test/feedback (a pattern sketched in code at the end of this sub-section). Lynch (2017) argues that:
If AI is going to benefit education, it will require strengthening the connection between AI developers and experts in the learning sciences. Otherwise, AI will simply ‘discover’ new ways to teach poorly and perpetuate erroneous ideas about teaching and learning.
Comprehension and understanding are indeed important foundational skills, but AI so far is not helping learners develop the higher-order skills of critical thinking, problem-solving, creativity and knowledge management. Indeed, Klutka et al. (2018) claim that AI can handle many of the routine functions currently done by instructors and administrators, freeing them up to solve more complex problems and connect with students on deeper levels. This reinforces the view that the role of the instructor or teacher needs to move away from content presentation, content management, and testing of content comprehension – all of which can be done by computing – towards skills development. The good news is that AI used in this way supports teachers and instructors but does not replace them. The bad news is that many teachers and instructors will need to change the way they teach, or they will become redundant.
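Here, for concreteness, is the present/test/feedback loop referred to above, as a minimal Python sketch. The lesson content and the matching rule are invented; real tutoring systems dress this pattern up considerably, but the underlying control flow is often the same.

```python
# A minimal sketch of the behaviorist present/test/feedback loop.
# Lesson content and the matching rule are invented for illustration.

lessons = [
    {"content": "Photosynthesis converts light energy to chemical energy.",
     "question": "Photosynthesis converts light energy to ___ energy.",
     "answer": "chemical"},
    {"content": "Chlorophyll absorbs mostly red and blue light.",
     "question": "Chlorophyll absorbs mostly red and ___ light.",
     "answer": "blue"},
]

def run_tutor(get_response=input):
    for lesson in lessons:
        while True:
            print(lesson["content"])                        # present
            reply = get_response(lesson["question"] + " ")  # test
            if reply.strip().lower() == lesson["answer"]:   # feedback
                print("Correct - moving on.")
                break
            print("Not quite - re-read the material and try again.")

if __name__ == "__main__":
    run_tutor()
```

Nothing in this loop models how the learner thinks, and nothing in it exercises critical thinking, problem-solving or creativity: it only branches on whether presented content can be recalled.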
Is There Unintended Bias in the Algorithms?
It could be argued that all AI does is encapsulate the existing biases in the system. The problem, though, is that this bias is often hard to detect in any specific algorithm, and that AI tends to scale up or magnify such biases. These issues matter more for institutional uses of AI, but machine-based bias can also discriminate against students in a teaching and learning context, especially in automated assessment.
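A toy simulation can make the mechanism concrete. Suppose human markers historically under-scored essays exhibiting some surface feature (say, dialect-specific vocabulary) that correlates with membership of a particular group. The data and the feature below are entirely invented, but any model fitted to such grades will learn the correlation and then apply it uniformly, at scale, to every future student.

```python
# A toy illustration (all data invented) of how an automated marker can
# inherit bias from historical human grading and then apply it at scale.

# Historical training pairs: (dialect_feature, human_grade). Essays with a
# high value of the feature were systematically under-scored by past markers.
history = [(0.1, 78), (0.2, 75), (0.1, 82), (0.8, 61), (0.9, 58), (0.7, 64)]

# Fit a one-variable least-squares line: grade = a * feature + b.
n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in history)
den = sum((x - mean_x) ** 2 for x, _ in history)
a = num / den
b = mean_y - a * mean_x

def auto_grade(dialect_feature: float) -> float:
    """Grade a new essay from the single surface feature."""
    return a * dialect_feature + b

# Two essays of equal quality; only the dialect feature differs.
print(round(auto_grade(0.1), 1), round(auto_grade(0.8), 1))  # approx. 79.3 vs 60.9
```

The bias was never written into the algorithm; it was inherited from the training data, which is precisely why it is so hard to detect by inspecting the code.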
Is the Application Ethical?
There are many potential ethical issues arising from the use of AI in teaching and learning, mainly due to the lack of transparency in the AI software, and particularly the assumptions embedded in the algorithms. The literature review by Zawacki-Richter et al. (2019) concluded:
…A stunning result of this review is the dramatic lack of critical reflection of the pedagogical and ethical implications as well as risks of implementing AI applications in higher education.
What data are being collected? Who owns or controls them? How are they being interpreted? How will they be used? Policies will need to be put in place to protect students and teachers/instructors (see for instance the U.S. Department of Education’s student data policies for schools), and students and teachers/instructors need to be involved in developing such policies.
Are the Results Explainable?
The biggest problem with AI generally, and in teaching and learning in particular, is the lack of transparency. How did it give me this grade? Why am I directed to this reading rather than that one? Why isn’t my answer acceptable? Lynch (2017) argues that most data collected about student learning are indirect, inauthentic, lacking demonstrable reliability or validity, and reflect unrealistic time horizons for demonstrating learning:
‘Current examples of AIEd often rely on … poor proxies for learning, using data that is easily collectable rather than educationally meaningful.’
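As a hypothetical illustration of what ‘easily collectable rather than educationally meaningful’ looks like in practice, consider a dashboard-style ‘engagement score’ computed from clickstream logs. The field names and weights below are invented, but the pattern is common: activity counts stand in for learning.

```python
# A toy 'engagement score' built from easily collectable clickstream data.
# Field names and weights are invented. Note that nothing here measures
# whether the student can actually do anything with the material.

def engagement_score(log: dict) -> float:
    """Weighted sum of activity counts - a proxy for presence, not learning."""
    return (0.4 * log["pages_viewed"]
            + 0.3 * log["minutes_online"] / 10
            + 0.3 * log["forum_posts"])

# A student who leaves tabs open and skims out-scores one who studies
# offline and demonstrates mastery elsewhere.
skimmer = {"pages_viewed": 120, "minutes_online": 400, "forum_posts": 5}
offline_learner = {"pages_viewed": 15, "minutes_online": 60, "forum_posts": 1}
print(round(engagement_score(skimmer), 1),
      round(engagement_score(offline_learner), 1))  # 61.5 vs 8.1
```

Such a score is trivially easy to compute and just as trivially easy to game, and it says nothing about reliability or validity as a measure of learning.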