Wednesday 21 January 2015

Robo-readers: towards automated #MOOC grading

Up until six years ago, the ideal eLearning group would be around 25 learners. This enabled community building, spaced and easy-to-follow interactions, feedback from the online tutor/facilitator, and timely graded assignments. As soon as the (x)MOOCs arrived, the sheer size of online learner groups forced online tutors and Technology Enhanced Learning (TEL) developers to rethink assessment. In all fairness, assessments come from (or in some cases are relics of) the industrial age, which demanded that learners grasp specific production-oriented processes or specific pieces of information to build upon. But now, with knowledge shifts happening all over the professional spectrum, new TEL solutions seem to be needed.

This post is one in a set of posts I am planning on TEL solutions that affect all of us in face-to-face, blended, and/or online education. Mostly for reflective purposes, but also to see where 'our' status as online teacher (or learner) might be going.

The rise of the robo-reader or robo-grader
One of the current solutions is the robo-reader. Robo-readers are algorithms that enable automated grading of assessments and/or assignments. The learner submits an assignment, an algorithm analyses it, and a grade with additional feedback is sent back to the learner.
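To make the idea concrete, here is a minimal sketch in Python of such an algorithm, assuming a purely illustrative feature-based approach: the features, example essays and grades are all invented for this post and are not taken from any actual robo-reader. Essays that teachers have already graded are turned into a few crude numeric features, a regression model is fitted to mimic those grades, and new submissions are then scored automatically.

# A toy feature-based grader: turn each essay into a few crude numeric
# features, fit a regression model on teacher-graded examples, and use it
# to predict a grade for new submissions. All data here is invented.
import numpy as np

def features(essay):
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return [
        len(words),                           # essay length
        len(set(w.lower() for w in words)),   # vocabulary size
        len(words) / max(len(sentences), 1),  # average sentence length
    ]

# Hypothetical training set: essays with teacher-assigned grades (0-10).
graded = [
    ("Short answer with few words.", 4.0),
    ("A longer, more elaborate answer that develops an argument across "
     "several sentences. It uses varied vocabulary and gives examples.", 8.0),
    ("An adequate answer. It covers the main point but stays brief.", 6.0),
]

X = np.array([features(text) for text, _ in graded], dtype=float)
y = np.array([grade for _, grade in graded])

# Least-squares fit (with a bias column) so the model mimics the teacher grades.
X1 = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

def robo_grade(essay):
    x = np.array(features(essay) + [1.0])
    return float(np.clip(x @ coef, 0, 10))

# The 'robo-reader' step: grade a new, unseen submission and attach canned feedback.
new_essay = "A new learner submission that the algorithm has never seen before."
print(robo_grade(new_essay), "- consider adding examples and varying your vocabulary.")

Real systems use far richer language features, but the basic loop of training on human-graded work and then predicting grades for new submissions is the same idea.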

There is potential for big returns, big money, which means the large MOOC platforms are investing in these types of solutions, robo-readers included.

There are of course multiple robo-grading options, as assignments and assessments have multiple layers of complexity. The picture in this blog post is taken from the overview page on automated grading of the Technical University of Darmstadt, Germany.

Some research, some critiques
One of my fellow students is Duygu Simsek, who is investigating "to what degree can computational text analysis and visual analytics be used to support the academic writing of students in higher education?". She too is quite impressed by the increased capacity of these types of algorithms, and she got me thinking about robo-readers. At first I was enchanted, but then it began to dawn on me that teachers are in the next wave of professionals threatened by automation. Which makes me wonder: can teachers be replaced? Well, some teacher activities can be automated, provided the results are of high quality.

In 2012 a study by the University of Akron, looking at learner preparedness for robo-readers (using 22,000 short essays), concluded that learners would rather have their assignments screened by robo-readers first, as a means to improve their final submissions. And looking at the history of such research, it is becoming clear that robo-reader results are getting closer to teacher grading, luckily not perfect ... yet. But, as with many other studies, it was scrutinized, and a critique was written based on some question marks that could be raised when investigating the data of the Akron study.

Piotr Mitros, chief scientist of EdX, is a believer, and why wouldn't he be? He has successfully pioneered various technologies, many of which optimize the learning process. And, as it goes, a movement to keep high-stakes grading in human hands was organized (with accompanying online signature gathering, called 'Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment'), with Les Perelman as its biggest driver.

Risks of robo-grading/robo-reading
One of the biggest challenges seems to be the standardization of language use in assignments. Indeed, once an algorithm is set up, it complies with certain boundaries, which means that 'only' one specified set of responses (however diverse) will lead to good feedback; the toy example below shows how easily such boundaries can be gamed. Dave Perrin talks about this challenge in a clear way in a 2013 paper, and he adds his perspective (20 years as a writing teacher) on robo-grading. And indeed I agree with him that guidance, the individualised support in becoming more able as a learner, is one of the many teacher strengths that are not captured in an algorithm. For there is a risk that comes with robo-grading that is not even related to the actual software, but to the expectations as perceived by teachers (a parallel with SAT scores, where the books provided by large educational publishing companies push teachers to drill students into giving certain, specific answers that are not always the only correct answers). There is a great paper on this perversion written by Meredith Broussard (2014) which clearly describes the perversity of standardised testing, educational books, and teacher options.
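To make that risk tangible, here is a toy illustration; the rubric keywords and example answers are invented for this post, and no real robo-reader is quite this simple. Once the algorithm rewards length and a fixed set of 'expected' terms, the boundaries become something to game rather than a measure of understanding.

# A deliberately naive scorer keyed to length and a fixed list of 'expected'
# rubric terms (both invented here). It shows how such boundaries can be
# gamed: padded keyword-stuffing outscores a concise answer phrased
# differently from what the algorithm was set up to expect.
EXPECTED_TERMS = {"photosynthesis", "chlorophyll", "sunlight", "glucose"}

def toy_score(essay):
    words = essay.lower().split()
    length_points = min(len(words) / 50, 5)                              # rewards sheer length, capped at 5
    keyword_points = 5 * len(EXPECTED_TERMS & set(words)) / len(EXPECTED_TERMS)
    return round(length_points + keyword_points, 1)                      # 0-10 scale

concise = "Plants turn light into sugar they can use."   # sound, but outside the expected vocabulary
padded = "Photosynthesis chlorophyll sunlight glucose and more words " * 12

print(toy_score(concise))  # close to 0: the boundaries do not recognise it
print(toy_score(padded))   # much higher, despite saying nothing coherent

This is, in essence, the kind of gaming behaviour Perelman has demonstrated against real scoring engines.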

The added value for the learner
At present I am not sure whether I like the rise of the robo-reader. But whether I like it or not, it is still coming. So what is the benefit for the learner (still the main goal of any teacher)? The responses from a joint study performed in 2010 by Khaled El Ebyary and Scott Windeatt showed remarkable similarities with learners' views on plagiarism tools: learners like the fact that they can get some basic feedback on their assignments from a computer, without losing face in front of (or in the mind of) a teacher.

Conclusion? Just reflecting on teacher automation and societal goals
At this point in time I do not have a conclusion on what I think about robo-grading. There is big money in such solutions, as they will cut down on teacher costs (time, effort, human capacity). Do I like this? Not as long as our educational and professional system is not turned around, shifting from professional morals towards personally meaningful goals in life. It is my belief that as long as education is fixed on jobs, and society only adds the stamp of 'good citizen' to those having a job, any movement towards job loss (e.g. through automation) will result in an unbalanced society where most citizens no longer have a feeling of being needed, of having worth.
