To Educate Students about AI, Make Them Use It

A college professor and his students explain what they learned from bringing ChatGPT into the classroom

ChatGPT’s ability to produce humanlike text on command has caused a crisis in higher education. Teachers and professors have been left bewildered, wondering what to do about a technology that could enable any student to fabricate assignments without actually learning. Although there is an understandable temptation to simply ban it from the classroom, I (C.W. Howell) took an alternative approach in my religious studies classes at Elon University.

I decided, instead, to have the students engage with ChatGPT directly. I chose to do this for two reasons. First, it would be difficult if not impossible to actually forbid it; students were going to use the text-generating AI no matter what. Second, unfortunately, even the students who tried to use it responsibly (that is, without just cheating wholesale) did not really understand the technology. Many mistakenly believe it is an infallible search engine. One student tried to use ChatGPT as a research tool and, unaware that it could confabulate fake sources and quotes, incorporated fraudulent information into an otherwise innocent paper. My goal was to prevent this type of misstep by teaching students about the flaws in models like ChatGPT.

To do so, I created an AI-powered class assignment. Each student was required to generate their own essay from ChatGPT and “grade” it according to my instructions. Students were asked to leave comments on the document, as though they were a professor assessing a student’s work. Then they answered questions I provided: Did ChatGPT confabulate any sources? If so, how did you find out? Did it use any sources correctly? Did it get any real sources wrong? Was its argument persuasive or shallow?


On supporting science journalism

If you're enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.


The results were eye-opening: Every one of the 63 essays contained confabulations and errors. Most students were surprised by this, and many were less impressed by the technology than they had been before doing the homework. I hope that other professors and teachers might benefit from incorporating assignments like this into their curricula as well.

In addition to teaching AI literacy and the responsible use of ChatGPT, this assignment also stimulated exciting and deeply insightful reactions from the students—about the use of AI in class, the purpose of essay-writing, and being human in an age of machines. I asked two of them, Cal Baker and Fayrah Stylianopoulos, to share their perspectives and insight on AI in education.

Cal Baker, sophomore:

The most crucial element of schoolwork is not the course material or the grade: The thinking a student does while working through an assignment matters more than the completed task they turn in. If students use ChatGPT to do assignments for them, I worry that they will miss out on these cognitive experiences.

The material itself is seldom why a school assignment was given in the first place; rather, it is what occurs in a student’s brain as they complete the assignment that is the backbone of schooling. Doing a math worksheet, synthesizing sources or writing a poem are all examples of assignments that develop a student’s brain. As a student works, their neurons form new connections, allowing them to work more quickly and easily the next time around and increasing their capacity for further learning and productivity.

Completing assignments with an AI like ChatGPT could harm a student’s cognitive development. A 2018 European Union policy report on the potential impacts of AI on education explains that a student’s brain is in a “critical phase” of development and warns of “quite fundamental consequences” if young brains learn to rely on artificial cognitive technologies during that phase. In other words, if they don’t put their own effort into schoolwork, students might miss out on developing the brain structures needed to solve problems for themselves. A 2022 paper in Frontiers in Artificial Intelligence reached a similar conclusion: The authors speculate that while “cognitive offloading,” or delegating tasks to an AI, “can improve immediate task performance, it might also be accompanied by detrimental long-term effects.” These effects might include diminished problem-solving abilities, worse memory and even a decrease in one’s ability to learn new things.

On the surface, the more an individual practices something, the better at it they are likely to become. On a deeper level, the processes that go on in a student’s brain as they undertake these assignments are the most important part. If a student turns to AI instead of doing the work themself, the neural pathways they would otherwise form and strengthen for that kind of task never develop. This ultimately hurts students: if they depend on technology that makes their lives easier in the short term, they will fail to develop their abilities for future work, making their lives more difficult in the long term.

Fayrah Stylianopoulos, sophomore:

Although ChatGPT is certainly dangerous if abused, I recognize that it has the potential to support students on their academic journeys. At its best, ChatGPT can be a versatile resource, introducing fresh, interactive ideas into the classroom for both teachers and students to enjoy. For instance, it can suggest unique learning experiences based on standardized objectives and draft lesson plans and prompts for student assignments. ChatGPT can even quiz students on their own class notes (in short-answer or multiple-choice format, no less), although students might be better served cognitively by writing their own questions and recall cues.
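As a purely illustrative aside, here is a minimal sketch of what the note-quizzing idea might look like in code, using the OpenAI Python SDK; the model name, file name and prompt wording are all assumptions, and in this class students used ChatGPT directly rather than through code.

```python
# A minimal sketch of asking a chat model to quiz a student on their
# own notes, via the OpenAI Python SDK (v1+). Assumes the
# OPENAI_API_KEY environment variable is set; "class_notes.txt" and
# the model name are hypothetical stand-ins.
from openai import OpenAI

client = OpenAI()

with open("class_notes.txt") as f:
    notes = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model works
    messages=[{
        "role": "user",
        "content": (
            "Write three multiple-choice questions, with an answer key, "
            "based only on these class notes:\n\n" + notes
        ),
    }],
)

print(response.choices[0].message.content)
```

As with any ChatGPT output, the resulting questions and answer key would need the same skeptical checking that the grading assignment described above is meant to teach.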

However, the ubiquity of AI in academic spaces compels students to reflect on who they are, and on what ChatGPT is.

AI-generated text can sound right, but sequential plausibility is not the same thing as truth. Grading ChatGPT’s essay for this assignment made it apparent that students, for this reason and others, are much smarter than large language models like ChatGPT. Unfortunately, few realize this. Many students feel insignificant or unintelligent when faced with such technology. We need to affirm students and instill in them the confidence to realize that their perspectives matter, and their critical thinking cannot be automated.
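To make “sequential plausibility” concrete, here is a minimal, hypothetical sketch of a bigram text generator in Python, a toy cousin of models like ChatGPT; the tiny corpus is invented for illustration. Each word is chosen only because it often follows the previous one in the training text, so the output can sound locally fluent while asserting nothing true.

```python
import random

# Toy bigram "language model": each next word is picked based only on
# the word before it, with no notion of whether the result is true.
# (A drastic simplification of how LLMs work, for illustration only.)
corpus = (
    "the study found that students learn best when they practice . "
    "the study found that parrots repeat phrases without understanding . "
    "students learn best when they repeat the study ."
).split()

# Record which words follow which word in the corpus.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

random.seed(0)
word, output = "the", ["the"]
for _ in range(15):
    # Pick a plausible successor; fall back to "." at a dead end.
    word = random.choice(follows.get(word, ["."]))
    output.append(word)

# Prints a locally fluent but globally unmoored word chain, e.g.
# "the study found that parrots repeat the study found that ..."
print(" ".join(output))
```

Real large language models are incomparably better at sounding right, but the basic move, predicting a plausible next token, is the same in kind, which is part of why confabulated sources can read so convincingly.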

Some critics have likened large language models like GPT-3 to trained parrots that repeat familiar phrases without an inkling of what their subtle contexts could mean to human listeners. If this passionless, detached standard of simply “sounding right” is rewarded in classrooms, it will have a tragically homogenizing effect on human thinking and dialogue. I believe there is something to be said for the essential, profound stake we share in the fate of this world, a stake that humans (and parrots, too) have but that ChatGPT does not. Despite all its incredible ability, ChatGPT has no sense of relationship to us or to the world. How can such a detached voice have anything to offer us that we do not already possess?

I worry that if students over-rely on machine learning technology, they will learn to think like it, and focus on predicting the most likely “right answer” instead of thinking critically and seeking to comprehend nuanced ideas. Science fiction often depicts artificial intelligence taking over society, leading to a technological singularity (where technology irrevocably surpasses humanity) and the end of the world. But I’m not worried about AI getting to where we are now. I’m much more worried about the possibility of us reverting to where AI is.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.