Wednesday, January 15, 2025

Cheating the Benefits of Cheating with AI


With the popularity of AI and the extraordinary, often dubious, claims made about its abilities (Narayanan & Kapoor, 2024), concern about students cheating has risen sharply. Naturally, proctoring and anti-plagiarism companies benefit from this fear, and sales of their ‘solutions’ increase.  The problem with this policing approach is that, beyond a point, teachers enter an ‘arms race’ between AI and AI-detection software in which the real loser is the institution footing the bill.  One might suspect these companies are happy to ‘stir the pot’ and raise anxiety for the benefit of their shareholders.  Meanwhile, research indicates that the level of academic dishonesty has not changed with the prevalence of AI (Lee et al., 2024).  Yet the focus on policing students places instructors in an adversarial relationship with them, instead of an instructive one.

 

A Better Solution

 

Low-stakes assignments have the added benefit of deterring academic dishonesty.  Because little is riding on any single assignment, there is little incentive to cheat and risk severe consequences.  Why take the risk when there is so little to gain?

 

Low-stakes assignments, such as threaded assignments, thwart the use of AI to quickly generate content.  These assignments act as a scaffold that requires further reflection and metacognitive skills that are not easily replicated by the stochastic language models modern AI uses.  A would-be culprit ends up expending so much effort reworking whatever the AI model produces into something submittable that cheating is simply not worth it.

 

When assignments:

  • offer less stress,
  • supply tools for the students to succeed,
  • clearly express expectations, and
  • encourage the student to take control of their learning,
the students begin to see the value in what they are learning.  This dissuades the learner from cutting corners rather than working through the process.  Not only will this lower academic dishonesty in your classes; low-stakes assignments will also encourage more of your students to become engaged, active learners.

 

Low-stakes assignments engage students in the learning process.  The best way to eliminate academic dishonesty is to remove the incentive to cheat and to be up front about the rules.  Students often ‘cheat’ because they have not received adequate instruction and clear expectations (Waltzer, Bareket-Shavit, & Dahl, 2023).  To address this, explicitly state the acceptable level of AI usage, and you will curb unintended violations of your academic expectations.  A series of low-stakes assignments, often scaffolded to build toward a larger assignment, then undermines the payoff from cheating.

References

Futterman, K. (2024). Zeitgeist 6.0: Results of the Campus sixth-annual student-body survey. The Middlebury Campus, December 13.

Lee, V., Pope, D., Miles, S., & Zarate, R. (2024). Cheating in the age of generative AI: A high school survey study of cheating behaviors before and after the release of ChatGPT. Computers and Education: Artificial Intelligence, 7, December.

Losey, R. (2024). Stochastic Parrots: How to tell if something was written by an AI or a human?

Mintz, S. (2023). 10 Ways to Prevent Cheating. Inside Higher Ed, February 16.

Mollick, E. (2023). Centaurs and Cyborgs on the Jagged Frontier: I think we have an answer on whether AIs will reshape work. One Useful Thing, September 16.

Narayanan, A., & Kapoor, S. (2024). AI Snake Oil: What artificial intelligence can do, what it can’t, and how to tell the difference. Princeton, NJ: Princeton University Press.

Waltzer, T., Bareket-Shavit, C., & Dahl, A. (2023). Teaching the What, Why, and How of Academic Integrity: Naturalistic Evidence from College Classrooms. Journal of College and Character, 24(3), 261–284.

Wehlburg, K. (2021). Assessment design that supports authentic learning (and discourages cheating). Times Higher Education, November 24.

