ONLINE PROCTORING

Scope of Use
  • A 2020 poll found that 54% of educational institutions now use online proctoring and 23% are considering using it.
  • An increasing share of online proctoring software contains AI and machine-learning components that analyse exam recordings to identify suspicious examinee behaviour or suspicious items in the examinee's immediate physical or digital environment.
  • Educational institutions use AI components to different degrees and in various ways, making it hard to estimate the overall usage of AI-based proctoring.
Technological robustness and efficacy
  • No extensive research has confirmed that AI-based proctoring is more effective than non-AI-based proctoring at detecting fraud
  • Cases of inaccurate detection of fraud have occurred
  • Not clear whether AI-based proctoring generalises to real-life situations whose circumstances differ from the training environment
Impact on citizens and society
  • While it is argued that online proctoring ensures academic integrity and honesty in online examinations, AI-driven versions of online proctoring are a highly intrusive form of surveillance.
  • It might set an undesired precedent for the future (greater acceptance of surveillance).
  • It erodes the culture of mutual trust between students and teachers that prevails in educational institutions.
Governance and Accountability
  • AI-based proctoring shows elements of non-compliance with existing laws such as the GDPR, for example where consent cannot be considered freely given.
  • AI-based proctoring has a negative effect on a student’s human/fundamental right to privacy (including physical and mental integrity and autonomy).
  • What is considered “suspicious behaviour” by the AI-driven proctoring system is unclear and usually not accounted for.
  • It is unknown whether students’ data is used for other purposes (e.g. training the system).
  • In a number of use-cases student bodies did not seem to be involved in the decision to apply the system.
Acceptable trade-offs in times of crisis

Online proctoring is justified on the grounds of preserving academic integrity and honesty amid the pandemic. However, it is not clear to what extent AI-based online proctoring is better at detecting fraud than non-AI-based online proctoring. AI-based proctoring imposes a significant trade-off on students’ privacy and autonomy, and places students’ personal data in a vulnerable position. It is also not clear to what extent access to students’ personal files and web-page visits, or monitoring of students’ eye movements, is necessary for detecting fraud in examinations.

AI PROCTORING TO FLAG SUSPICIOUS BEHAVIOUR

Many universities have adopted AI-based online proctoring (OP) that supposedly recognises ‘suspicious behaviour’ and reports instances of potential fraud during online examinations. The aim of OP is to preserve academic integrity and honesty in online exams. However, the use of OP has faced objections from student-representative bodies, who deem OP to infringe the privacy and autonomy of students by exposing them to extensive surveillance. Many OP systems detect fraud by flagging what the system deems “excessive looking away from the screen”, open webpages, on-screen activity and open personal files during exams. Furthermore, students are required to perform a “360° scan” of their room and keep their webcam on during the exam, exposing their private space at home.
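
To make concrete what such flagging can amount to in practice, the sketch below shows one way a gaze-based “looking away” heuristic could be implemented. It is a minimal illustration under our own assumptions (the function name, the threshold and the existence of a per-frame gaze detector are hypothetical); it is not the implementation of any specific OP product.

```python
# Hypothetical sketch of the kind of heuristic an OP system might apply.
# Names and the threshold are illustrative assumptions, not taken from any
# real proctoring product.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class FrameObservation:
    timestamp: float      # seconds since the start of the exam
    gaze_on_screen: bool  # output of a (hypothetical) per-frame gaze detector


def flag_excessive_looking_away(frames: List[FrameObservation],
                                max_away_seconds: float = 5.0) -> List[Tuple[float, float]]:
    """Return (start, end) intervals where the examinee looked away from the
    screen for longer than the (arbitrary) threshold."""
    flags = []
    away_start = None
    for frame in frames:
        if not frame.gaze_on_screen:
            if away_start is None:
                away_start = frame.timestamp
        else:
            if away_start is not None and frame.timestamp - away_start > max_away_seconds:
                flags.append((away_start, frame.timestamp))
            away_start = None
    return flags
```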

We have tested the technology against the 24 requirements of the “Framework for Responsible AI in times of Corona”, divided into 3 categories: (i) Society; (ii) Technology; and (iii) Governance.

Each requirement is given a score from ‘0’ to ‘3’:

  • 0 = unknown
  • 1 = no compliance
  • 2 = partial compliance
  • 3 = full compliance
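
As an illustration of how such scores can be recorded and summarised, the sketch below stores a score per requirement and counts compliance levels per category. The requirement identifiers and data layout are our own placeholders; only the 0–3 scale and the three categories come from the framework.

```python
# Minimal sketch for recording assessment scores on the 0-3 scale.
# Requirement identifiers are placeholders; only the scale and the three
# categories (Society, Technology, Governance) come from the framework.

SCALE = {0: "unknown", 1: "no compliance", 2: "partial compliance", 3: "full compliance"}

scores = {
    "Society":    {"S1": 1, "S2": 2},  # placeholder requirement IDs and scores
    "Technology": {"T1": 0, "T2": 1},
    "Governance": {"G1": 1, "G2": 0},
}


def summarise(scores):
    """Count how often each compliance level occurs within each category."""
    summary = {}
    for category, requirements in scores.items():
        counts = {label: 0 for label in SCALE.values()}
        for score in requirements.values():
            counts[SCALE[score]] += 1
        summary[category] = counts
    return summary


print(summarise(scores))
```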

USE CASE

Society

  • The impact on students is connected to broader technological and social changes and trajectories. These trajectories may include increased surveillance, a constriction of the private domain, and the spread of undesired (or unscientific) AI-based decision-making
  • Some students and university staff feel that OP could damage a university’s “culture of trust”
  • OP may normalise the use of AI-based decision-making in educational institutions in ways that some find intrusive or otherwise problematic, setting an undesired precedent for the future
  • Access to personal digital files and to private spaces such as the student’s room affects the human right to a private life
  • Constant tracking of eye movement, head movement and sound, especially in the stressful situation of taking an exam, may affect the human right to mental integrity
  • Risk of increasing and generalizing surveillance
  • Risk of habituation and trivialization of intrusive technologies
  • In many instances no consultation of student bodies took place
  • Compliance with GDPR disputable
  • Compliance with internal university guidelines and regulations unknown
  • The AI system “detects” potentially suspicious behaviour and provides the instructor with a “red flag”, but the instructor decides on the action (see the sketch after this list)
  • Instructors may choose which settings they will and won’t use — for instance, they might choose to disable or ignore AI algorithms that track eye movements.
  • Instructors may have unwarranted faith in the red flags, such as the automated flagging of “suspicious” head movements, especially if we consider the conscious and unconscious inclination for some people to over-trust AI [Dreyfus et al., 2000]
  • Numerous instances of exams crashing because of online proctoring, forcing students to restart the exam from the beginning or answer some questions twice
  • Facial recognition software has been associated with bias in the (mis)recognition of certain racial groups and genders
  • The accuracy of most systems is unknown, which could result in unfairness when a system produces many false positives
  • False flagging of normal behaviour of students with disabilities and/or neuro-atypical features
  • To avoid this potentially wrongful treatment, such students may be forced to disclose personal idiosyncrasies or impairments, compromising their privacy.
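
A minimal sketch of the human-in-the-loop step mentioned above (the AI only raises flags; the instructor decides) is given below. The class and method names are our own assumptions and do not reflect any specific OP product.

```python
# Hypothetical sketch of the human-in-the-loop step: the AI side only records
# flags; a consequence follows only after an instructor reviews the flag and
# explicitly decides. Class and field names are our own assumptions.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Flag:
    student_id: str
    reason: str                                # e.g. "excessive looking away"
    instructor_decision: Optional[str] = None  # None until an instructor reviews


class FlagQueue:
    def __init__(self) -> None:
        self.flags: List[Flag] = []

    def raise_flag(self, student_id: str, reason: str) -> Flag:
        # The AI component only records a flag; it never takes action itself.
        flag = Flag(student_id, reason)
        self.flags.append(flag)
        return flag

    def review(self, flag: Flag, decision: str) -> Flag:
        # Only an explicit instructor decision ("dismiss" or "investigate")
        # turns a flag into a follow-up action.
        if decision not in ("dismiss", "investigate"):
            raise ValueError("instructor must choose 'dismiss' or 'investigate'")
        flag.instructor_decision = decision
        return flag
```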

Technology

  • There is a clear definition of the problem AI-driven OP tries to address
  • Alternative, less invasive solutions exist, e.g. solutions that avoid facial recognition, facial or gaze detection, or sound detection and ‘analysis’.
  • It is generally unknown whether intrusive AI-based proctoring systems are more effective than less intrusive alternatives
  • AI-based proctoring can increase the likelihood of detecting suspicious behaviour that would be otherwise unnoticed
  • AI-based proctoring can both mitigate as well as increase human bias and error
  • An increasing amount of material and tips is available on how to trick proctoring algorithms
  • Unknown whether potential adverse effects were identified
  • There are generally insufficient guarantees against possible inadvertent capture of personal or sensitive information
  • Risk of exposure of students’ personal data in a possible cyber-attack on the educational institution
  • Levels of accuracy are generally unknown
  • Unable to determine the rates of false positives and false negatives (see the sketch after this list)
  • The training process of the systems is unknown, so it is also unknown how a system determines ‘suspicious behaviour’ and whether such behaviour can be considered generally suspicious
  • Unknown whether the systems work satisfactorily in different conditions such as poor or bright lighting, different webcam qualities, different skin tones, etc.
  • Instructors can in theory decide to stop using the system at any moment
  • Instructors make the ultimate decision to act on a ‘flag’
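
To make precise what the missing false-positive and false-negative rates would involve, the sketch below shows how those rates are computed when ground truth (whether fraud actually occurred) is available; for current OP systems this evaluation is generally not published. The function name and data are invented purely for illustration.

```python
# Sketch of the evaluation such systems would need: false-positive and
# false-negative rates computed against ground truth (did the student actually
# commit fraud?). The data below is invented purely for illustration.

def error_rates(predictions, ground_truth):
    """Both arguments are lists of booleans: True = fraud flagged / committed."""
    fp = sum(p and not t for p, t in zip(predictions, ground_truth))
    fn = sum(not p and t for p, t in zip(predictions, ground_truth))
    tn = sum(not p and not t for p, t in zip(predictions, ground_truth))
    tp = sum(p and t for p, t in zip(predictions, ground_truth))

    fpr = fp / (fp + tn) if (fp + tn) else float("nan")  # honest students wrongly flagged
    fnr = fn / (fn + tp) if (fn + tp) else float("nan")  # fraud that went undetected
    return fpr, fnr


# Invented example: six exams, two actual cases of fraud.
flags = [True, False, True, False, False, True]
truth = [True, False, False, False, False, True]
print(error_rates(flags, truth))  # -> (0.25, 0.0)
```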

Governance

  • The legality of OP is generally assessed under the GDPR, but no OP-specific legislation exists
  • There is substantial impact on the human right to privacy
  • Less intrusive non-AI-based proctoring systems exist
  • A 360° scan of the room, access to visited webpages and personal files, and constant monitoring of eye and/or head movement can be deemed unnecessary and disproportionate to achieving the desired result
  • Unclear whether students’ data is used for other purposes
  • Unclear whether systems will be used outside of exam setting (for example to monitor student participation during classes)
  • Unknown whether the public was made aware of the use of the application
  • Submission to the application is not genuinely voluntary, as the only alternative is for the student not to take the exam
  • No indication that these types of OP will be abandoned after the crisis
  • Generally, student bodies are not consulted during, or made part of, the decision-making process
  • No public information on whether the use of the system was documented for accountability purposes
  • Usually no possibility for students to challenge the outcome or ‘flagging’ of an OP system

Assessment preparation: Christofer Talvitie