“Write a lede for an article about the implications of ChatGPT in the university,” I prompt the chatbot.
“As artificial intelligence continues to advance at a rapid pace, universities around the world are exploring new ways to harness its power. One such development is ChatGPT, a cutting-edge language model created by OpenAI,” it begins. “With the ability to converse fluidly in natural language, ChatGPT has the potential to revolutionize the way students learn and interact with information, creating exciting new opportunities for education and research.
“But as this technology becomes more prevalent on university campuses, it also raises important questions about the role of AI in higher education, and the implications it may have for the future of learning.”
A text-generating chatbot released to the public last November, ChatGPT has made its way to universities across the globe, prompting discussions about its implications in the academy.
As the chair of the University of Winnipeg’s (U of W) Senate Academic Standards and Misconduct Committee, Tracey Whalen is witnessing faculty, staff and students grapple with the bot in real time. While there’s yet to be a ChatGPT academic misconduct case at the U of W, it’s on the committee’s radar.
“Anecdotally, I know that faculty members are starting to see more auto-generated content,” Whalen says. “Faculty are starting to experiment with it themselves. They’re trying to see what it means.”
Beyond fears of academic misconduct, Whalen says the bot has prompted professors and students to ask themselves an important question: “What constitutes a respectful and correct way of acknowledging people’s work and paraphrasing information?”
It’s a question that U of W librarian Ian Fraser encounters, too.
Earlier this year, while helping a student navigate ChatGPT-generated content, Fraser uncovered a critical limitation: many of the “citations” the bot included were completely fabricated.
“There have been some instances already in the library where students have had references that were generated by ChatGPT that went to non-existent resources,” Fraser says. “It’s able to generate a citation that looks real, but then when we search the library literature more broadly ... it doesn’t actually exist at all.”
At the same time, Brenda Stoesz, a senior faculty specialist at the Centre for the Advancement of Teaching and Learning at the University of Manitoba (U of M), believes examining ChatGPT through a myopic, academic-misconduct lens ignores its positive uses.
“Just like a calculator, sometimes it’s important to do some work, when we’re learning about a concept in mathematics, on our own, without the support of tools,” Stoesz says. “Sometimes it’s a timesaver, and we can offload some of those cognitive processes to a tool.”
“A technology like ChatGPT, we won’t be able to get away from, but maybe we can learn how to use it ethically.”
While the U of W administration has yet to comment on ChatGPT, the U of M’s Academic Learning Centre published a student guide to academic integrity and artificial intelligence back in February.
With relatively small class sizes, the U of W might be at an advantage. Whalen says some instructors are already experimenting with ways around the bots.
“One instructor, for example, said she might do some more old-school evaluations,” Whalen says. “Students would write something in class, and she would have a sense of their tone, a sense of their voice, so that when they hand things in later on ... she would have a sense of how they write.”
Across the river at the comparably larger U of M, Stoesz says there are ways to help students lost in the sea of lecture halls. Libraries, tutoring centres and office hours can help make the place for AI in student learning more transparent.
Still, it’s worthwhile to ask: why might students want to use ChatGPT in the first place?
Aside from those who go on to graduate school, it’s unlikely people will need to analyze Hobbes, Marx or Nietzsche on the job – or in the “real world,” as students might say.
Plugging scholarly information into a chatbot, then, seems more efficient than reading a Shakespearean sonnet a dozen times over.
Yet, as Irina Dumitrescu argues in a recent Walrus article, the idea that students will succumb to the efficiency of AI dismisses a fundamental pillar of the university: to think critically.
“If students are judged based on how well they stick to a model, it’s understandable that they will look for the most efficient way to reach that goal,” Dumitrescu writes. “But the goal of school writing isn’t to produce goods for a market. We do not ask students to write a 10-page essay on the Peace of Westphalia because there’s a worldwide shortage of such essays. Writing is an invaluable part of how students learn.”
Fraser says most of the U of W students he encounters are eager to learn. Focusing solely on academic misconduct is misguided, he says.
“Most students are trying to produce authentic work and really learn about things,” he says. “It’s much better for us to look at how it can be integrated into the research process in a legitimate way ... that helps bolster a student’s learning as opposed to bypassing it.”
In some ways, the moment is familiar to Fraser, who witnessed a “similar crisis” around a tool that has since become commonplace.
“When I started, all the talk was centred around Wikipedia and how that tool was not an appropriate academic research tool,” Fraser says. “Students continued to use it because they saw a value in it.”
Stoesz believes it’s important that faculty and students don’t shy away from tough conversations about new technologies.
“If students are able to be very honest with the difficulties they’re having in the class and their temptations about using a tool that maybe is not permitted, and (they) approach their professor about support, that’s a really good way to build trust between students and teachers,” she says.
The bots are here to stay. But the humanity of students won’t go away.
Published in Volume 77, Number 24 of The Uniter (March 30, 2023)