Don’t Ban ChatGPT in Schools. Teach With It.
Recently, I gave a talk to a group of K-12 teachers and public school administrators in New York. The topic was artificial intelligence, and how schools would need to adapt to prepare students for a future filled with all kinds of capable A.I. tools.
But it turned out that my audience cared about only one A.I. tool: ChatGPT, the buzzy chat bot developed by OpenAI that is capable of writing cogent essays, solving science and math problems and producing working computer code.
ChatGPT is new — it was released in late November — but it has already sent many educators into a panic. Students are using it to write their assignments, passing off A.I.-generated essays and problem sets as their own. Teachers and school administrators have been scrambling to catch students using the tool to cheat, and they are fretting about the havoc ChatGPT could wreak on their lesson plans. (Some publications have declared, perhaps a bit prematurely, that ChatGPT has killed homework altogether.)
Cheating is the immediate, practical fear, along with the bot’s propensity to spit out wrong or misleading answers. But there are existential worries, too. One high school teacher told me that he used ChatGPT to evaluate a few of his students’ papers, and that the app had provided more detailed and useful feedback on them than he would have, in a tiny fraction of the time.
“Am I even necessary now?” he asked me, only half-joking.
Some schools have responded to ChatGPT by cracking down. New York City public schools, for example, recently blocked ChatGPT access on school computers and networks, citing “concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of content.” Schools in other cities, including Seattle, have also restricted access. (Tim Robinson, a spokesman for Seattle Public Schools, told me that ChatGPT was blocked on school devices in December, “along with five other cheating tools.”)
It’s easy to understand why educators feel threatened. ChatGPT is a freakishly capable tool that landed in their midst with no warning, and it performs reasonably well across a wide variety of tasks and academic subjects. There are legitimate questions about the ethics of A.I.-generated writing, and concerns about whether the answers ChatGPT gives are accurate. (Often, they’re not.) And I’m sympathetic to teachers who feel that they have enough to worry about, without adding A.I.-generated homework to the mix.
But after talking with dozens of educators over the past few weeks, I’ve come around to the view that banning ChatGPT from the classroom is the wrong move.
Instead, I believe schools should thoughtfully embrace ChatGPT as a teaching aid — one that could unlock student creativity, offer personalized tutoring, and better prepare students to work alongside A.I. systems as adults. Here’s why.
It won’t work
The first reason not to ban ChatGPT in schools is that, to be blunt, it’s not going to work.
Sure, a school can block the ChatGPT website on school networks and school-owned devices. But students have phones, laptops and any number of other ways of accessing it outside of class. (Just for kicks, I asked ChatGPT how a student who was intent on using the app might evade a schoolwide ban. It came up with five answers, all totally plausible, including using a VPN to disguise the student’s web traffic.)
Some teachers have high hopes for tools such as GPTZero, a program built by a Princeton student that claims to be able to detect A.I.-generated writing. But these tools aren’t reliably accurate, and it’s relatively easy to fool them by changing a few words, or using a different A.I. program to paraphrase certain passages.
A.I. chat bots could be programmed to watermark their outputs in some way, so teachers would have an easier time spotting A.I.-generated text. But this, too, is a flimsy defense. Right now, ChatGPT is the only free, easy-to-use chat bot of its caliber. But there will be others, and students will soon be able to take their pick, probably including apps with no A.I. fingerprints.
Even if it were technically possible to block ChatGPT, do teachers want to spend their nights and weekends keeping up with the latest A.I. detection software? Several educators I spoke with said that while they found the idea of ChatGPT-assisted cheating annoying, policing it sounded even worse.
“I don’t want to be in an adversarial relationship with my students,” said Gina Parnaby, the chair of the English department at the Marist School, an independent school for grades seven through 12 outside Atlanta. “If our mind-set approaching this is that we have to build a better mousetrap to catch kids cheating, I just think that’s the wrong approach, because the kids are going to figure something out.”
Instead of starting an endless game of whack-a-mole against an ever-expanding army of A.I. chat bots, here’s a suggestion: For the rest of the academic year, schools should treat ChatGPT the way they treat calculators — allowing it for some assignments, but not others, and assuming that unless students are being supervised in person with their devices stashed away, they’re probably using one.
Then, over the summer, teachers can modify their lesson plans — replacing take-home exams with in-class tests or group discussions, for example — to try to keep cheaters at bay.
ChatGPT can be a teacher’s best friend
The second reason not to ban ChatGPT from the classroom is that, with the right approach, it can be an effective teaching tool.
Cherie Shields, a high school English teacher in Oregon, told me that she had recently assigned students in one of her classes to use ChatGPT to create outlines for their essays comparing and contrasting two 19th-century short stories that touch on themes of gender and mental health: “The Story of an Hour,” by Kate Chopin, and “The Yellow Wallpaper,” by Charlotte Perkins Gilman. Once the outlines were generated, her students put their laptops away and wrote their essays longhand.
The process, she said, had not only deepened students’ understanding of the stories. It had also taught them about interacting with A.I. models, and how to coax a helpful response out of one.
“They have to understand, ‘I need this to produce an outline about X, Y and Z,’ and they have to think very carefully about it,” Ms. Shields said. “And if they don’t get the result that they want, they can always revise it.”
Creating outlines is just one of the many ways that ChatGPT could be used in class. It could write personalized lesson plans for each student (“explain Newton’s laws of motion to a visual-spatial learner”) and generate ideas for classroom activities (“write a script for a ‘Friends’ episode that takes place at the Constitutional Convention”). It could serve as an after-hours tutor (“explain the Doppler effect, using language an eighth grader could understand”) or a debate sparring partner (“convince me that animal testing should be banned”). It could be used as a starting point for in-class exercises, or a tool for English language learners to improve their basic writing skills. (The teaching blog Ditch That Textbook has a long list of possible classroom uses for ChatGPT.)
Even ChatGPT’s flaws — such as the fact that its answers to factual questions are often wrong — can become fodder for a critical thinking exercise. Several teachers told me that they had instructed students to try to trip up ChatGPT, or evaluate its responses the way a teacher would evaluate a student’s.
ChatGPT can also help teachers save time preparing for class. Jon Gold, an eighth grade history teacher at Moses Brown School, a pre-K through 12th grade Quaker school in Providence, R.I., said that he had experimented with using ChatGPT to generate quizzes. He fed the bot an article about Ukraine, for example, and asked it to generate 10 multiple-choice questions that could be used to test students’ understanding of the article. (Of those 10 questions, he said, six were usable.)
Ultimately, Mr. Gold said, ChatGPT wasn’t a threat to student learning as long as teachers paired it with substantive, in-class discussions.
“Any tool that lets students refine their thinking before they come to class, and practice their ideas, is only going to make our discussions richer,” he said.
ChatGPT teaches students about the world they’ll inhabit
Now, I’ll take off my tech columnist hat for a second, and confess that writing this piece has made me a little sad. I loved school, and it pains me, on some level, to think that instead of sharpening their skills by writing essays about “The Sun Also Rises” or straining to factor a trigonometric expression, today’s students might simply ask an A.I. chat bot to do it for them.
I also don’t believe that educators who are reflexively opposed to ChatGPT are being irrational. This type of A.I. really is (if you’ll excuse the buzzword) disruptive — to classroom routines, to longstanding pedagogical practices, and to the basic principle that the work students turn in should reflect cogitation happening inside their brains, rather than in the latent space of a machine learning model hosted on a distant supercomputer.
But the barricade has fallen. Tools like ChatGPT aren’t going anywhere; they’re only going to improve, and barring some major regulatory intervention, this particular form of machine intelligence is now a fixture of our society.
“Large language models aren’t going to get less capable in the next few years,” said Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania. “We need to figure out a way to adjust to these tools, and not just ban them.”
That’s the biggest reason not to ban it from the classroom, in fact — because today’s students will graduate into a world full of generative A.I. programs. They’ll need to know their way around these tools — their strengths and weaknesses, their hallmarks and blind spots — in order to work alongside them. To be good citizens, they’ll need hands-on experience to understand how this type of A.I. works, what types of bias it contains, and how it can be misused and weaponized.
This adjustment won’t be easy. Sudden technological shifts rarely are. But who better to guide students into this strange new world than their teachers?