Professor fails students after ChatGPT falsely accuses them of cheating

TL;DR

  • A university professor has reportedly punished students for using AI to write their papers.
  • However, the professor relied on ChatGPT itself to determine whether the papers were AI-generated, something the chatbot cannot do reliably.

It’s not unheard of for students to cheat on their homework by using generative AI platforms such as ChatGPT. However, ChatGPT and its peers aren’t reliable at detecting AI-generated text. That hasn’t stopped a university professor from reportedly using the chatbot as an AI detector, with disastrous results.

A Texas A&M University professor apparently punished 15 students for using ChatGPT to generate their papers, according to a Reddit post (h/t: PCMag). The affected students were given an incomplete grade on their papers and subsequently had their diplomas put on temporary hold pending an investigation.

However, it turns out that the professor simply fed the papers into ChatGPT itself in a bid to determine whether they were AI-generated works.

Erroneous use of ChatGPT

“In grading your last three assignments I have opened my own account for Chat GTP (sic). I copy and paste your responses in this account and Chat GTP will tell me if the program generated the content,” read an excerpt of the professor’s letter, according to a screenshot shared by Redditor DearKick, the fiancé of an affected student.

“I put everyone’s last three assignments through two separate times and if they were both claimed by Chat GTP you received a 0.”

Aside from the fact that the professor failed to spell ChatGPT correctly, the platform simply isn’t capable of reliably determining whether a piece of text was generated by AI. There are services that specialize in detecting AI-written work with varying degrees of success (e.g. Turnitin and GPTZero), but the instructor evidently didn’t use them.

Underscoring just how unreliable ChatGPT is as a detector, the chatbot claimed that the instructor’s own 2021 research paper was written by AI. Needless to say, AI platforms like ChatGPT, Bing Chat, and Google Bard are prone to misinformation and so-called hallucinations at times.