To ensure continuity of education and allow as many students as possible to graduate during the pandemic, many universities adopted online proctoring (OP). OP systems vary widely in how invasive they are. AI-driven OP systems typically claim to flag “suspicious behaviour” and report instances of potential fraud during online examinations, based on a variety of factors such as eye tracking, facial recognition or detection, gaze detection (to determine excessive looking away from the screen), keystrokes, corrections, open web pages, and room sounds. These systems generally also have access to students’ personal files during the exam. Furthermore, students are required to perform a “360 scan” of their room and keep their webcam on throughout the exam.
We tested the technology against the 24 requirements of the “Framework for Responsible AI in times of Corona”, divided into three categories: (i) Society; (ii) Technology; and (iii) Governance.
Each requirement is given a score from 0 to 3:
0 = unknown
1 = no compliance
2 = partial compliance
3 = full compliance
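The scoring scheme above can be sketched in code. The following is a minimal illustration, with hypothetical example scores (not the actual assessment results), of how the 24 requirements could be recorded per category and summarised against the 0–3 scale:

```python
# Labels for the 0-3 compliance scale defined above.
SCORE_LABELS = {
    0: "unknown",
    1: "no compliance",
    2: "partial compliance",
    3: "full compliance",
}

# Illustrative scores only (hypothetical, not real assessment data),
# grouped by the three categories of the framework.
scores = {
    "Society": [2, 1, 3],
    "Technology": [0, 2],
    "Governance": [3, 3, 1],
}

def summarize(scores):
    """Count how many requirements fall under each score label, per category."""
    summary = {}
    for category, values in scores.items():
        counts = {label: 0 for label in SCORE_LABELS.values()}
        for value in values:
            counts[SCORE_LABELS[value]] += 1
        summary[category] = counts
    return summary

for category, counts in summarize(scores).items():
    print(category, counts)
```

Such a tabulation makes it straightforward to see, per category, how many requirements are fully met, partially met, unmet, or unknown.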