In the spring of 2020, due to the ongoing pandemic and the consequent school closures, A-level students in the UK faced the novel situation of being awarded grades generated by an algorithm administered by the Office of Qualifications and Examinations Regulation (Ofqual). The model's inputs included students' past work, teachers' predicted grades, the school's past performance, and the students' ranking within the school. After the results were published, the use of the algorithmic grading system caused a public uproar, as the method was widely deemed unjust: almost 40% of students received grades lower than predicted, sparking protests and legal action. Ultimately, Ofqual made a U-turn and grades were re-issued based solely on teacher judgment.
We tested the technology against the 24 requirements of the “Framework for Responsible AI in times of Corona”, which are divided into three categories: (i) Society; (ii) Technology; and (iii) Governance.
Each requirement is assigned a score from ‘0’ to ‘3’:
0 = unknown
1 = no compliance
2 = partial compliance
3 = full compliance
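The scoring scheme above can be represented programmatically, which is useful for tallying results across the 24 requirements. Below is a minimal sketch; the requirement names and scores are hypothetical placeholders, not the actual assessment results.

```python
from enum import IntEnum

class Compliance(IntEnum):
    """The framework's 0-3 scoring scale."""
    UNKNOWN = 0
    NO_COMPLIANCE = 1
    PARTIAL = 2
    FULL = 3

# Hypothetical example scores; real assessments would cover all
# 24 requirements across the three categories.
scores = {
    "Society": [Compliance.PARTIAL, Compliance.NO_COMPLIANCE],
    "Technology": [Compliance.FULL, Compliance.UNKNOWN],
    "Governance": [Compliance.PARTIAL, Compliance.PARTIAL],
}

def category_summary(scores):
    """Count how many requirements reach each compliance level, per category."""
    return {
        category: {level.name: sum(1 for s in reqs if s is level)
                   for level in Compliance}
        for category, reqs in scores.items()
    }

summary = category_summary(scores)
```

A tally like this makes it easy to see, per category, where the technology falls short and where evidence is simply missing (score ‘0’).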