ChatGPT, an AI text generation program that launched in Nov. 2022, caught the attention of Wellesley students and faculty in the 2023 spring semester. Carolyn Anderson, assistant professor of computer science, briefly explained how ChatGPT works.
“ChatGPT is a text generation model trained specifically for conversational interaction,” said Anderson. “[It is] trained on very large collections of internet data, with the goal of predicting the most likely next word to occur. So when you ask ChatGPT a question, it generates its response word-by-word based on how likely each word is to follow the previous one.”
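The word-by-word process Anderson describes can be illustrated with a toy sketch. The snippet below is not ChatGPT's actual architecture (which is a large neural network trained on internet-scale data); it is a simple bigram model over a made-up corpus that generates text by always choosing the word most likely to follow the previous one.

```python
# Toy illustration of next-word prediction: count which word most often
# follows each word in a tiny corpus, then generate text greedily.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=4):
    """Generate text by repeatedly picking the most likely next word."""
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:  # no word ever followed this one in the corpus
            break
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # → "the cat sat on the"
```

Real models like ChatGPT work over probabilities learned by a neural network rather than raw counts, and sample from the distribution instead of always taking the top word, but the loop is conceptually the same: predict, append, repeat.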
ChatGPT has also made its way to Wellesley in research-oriented ways: Anderson's lab uses ChatGPT to study human-AI relationships and cultural patterns reflected in generated text.
ChatGPT has become popular across other college campuses as well. Northeastern University student Lauren Williams said that she has seen people across her campus use it frequently for homework assignments and essay writing.
“In terms of academics, I think it’s amazing that it’s always ready. [For example], my friend is really good at organic chemistry and instead of waiting for the teacher to get back to her, she just checks ChatGPT for really difficult questions. It’s a good extra study tool,” said Williams.
Beyond academics, ChatGPT is becoming popular in social culture. Williams chuckled as she recalled returning home during Northeastern’s spring break to find her father using it for fun.
“Its humor isn’t that good and it can really only do dad jokes that are kind of cheesy, but it can make jokes, which is really weird,” said Williams. “I heard that, if you give it a list of ingredients, it will make you your own recipe.”
ChatGPT also has its fair share of drawbacks. Not only does it offer a more mainstream and accessible way for students to plagiarize, but the program itself has biases and makes errors.
“One drawback is that ChatGPT produces text that is plausible, but not necessarily truthful. Another drawback is that it’s trained from Internet data, which contains toxic content and sociocultural biases that ChatGPT may reproduce,” Anderson said.
Jessica Wegner ’22 agreed with the concerns. In an official statement to The News, Wegner shared her thoughts.
“Kids have it so easy these days with [the rise of] technology, especially ChatGPT. I wonder what the future of education will look like. It’s such a different world now,” said Wegner.
Both Anderson and philosophy professor Julie Walsh compared ChatGPT to Wikipedia, which once posed the most similar threat to academic integrity. The worry then was that students’ research skills would suffer when vast amounts of information were collected and published on a single website; similar anxieties are now forming around ChatGPT. However, the program isn’t flawless at seeming human either, according to Walsh.
“Even when it gets stuff right, the writing is just very flat,” she said.
The latest version of ChatGPT, powered by the GPT-4 model, was released on March 14. Unlike Walsh, Williams said that her conversation with ChatGPT felt too real.
“As a kid, I had previously tried out chatbots online and they were always awful. They couldn’t speak normally. Then, I tried talking to ChatGPT and it was like talking to a person. It felt real. I was kind of concerned by that. I was impressed, but also concerned,” Williams said.
Walsh highlighted that Wellesley’s Honor Code makes it distinctive from other colleges and universities, like Northeastern. While plagiarism detectors elsewhere inhibit students from using ChatGPT and other text generation programs to do their work for them, Wellesley professors infrequently use those detection programs.
“[The future] will be a battle between ChatGPT and ChatGPT-identifying software,” Wegner said.
While the rise of text generation models and other programs can make cheating more accessible, Walsh said she views the erosion of trust not only between professors and students, but also among students themselves, as the bigger problem.
“If the worry is [that] this will facilitate cheating, then I think our conversation has to be about why students are cheating, not what tools are available for them to do it,” she said.
Williams expressed a similar sentiment.
“When I think about people using ChatGPT to write essays, I get a horrible feeling in my heart. [By] saying ‘Oh I have an essay, I am not going to sit down and think about [it], I’m just going to use this robot to write it for me,’ I feel like you’re losing a lot of the critical thinking skills and the ability to reflect that you would get. If you can speak, you can write,” said Williams.
When it comes to programs like ChatGPT, ethical questions arise about the basis of the algorithm and its potential problems.
“I’m sure text generation models will continue to improve. But I think there are still important unanswered questions about our ability to control their biases, identify misinformation and prevent harmful uses,” Anderson said.
Even now, humanists are studying the ethical effects of text generation. Anderson is excited about the possibility of using text generation to study patterns in various cultures around the world.
“Since they are trained on large collections of our cultural artifacts, these models can reflect back to us the patterns and biases present in our cultures,” Anderson said.
The overlap between computer scientists and humanists is made clear through the ethical discussions that followed the popularization of AI art generators like DALL-E in the summer of 2022, discussions that are being renewed as ChatGPT becomes popular.
Walsh concluded, “So that’s why we need more humanists in tech.”
Williams’s stance concerning ChatGPT is representative of many college students’.
“I don’t think you can be for or against [ChatGPT],” said Williams. “It’s just going to happen, so I guess it’s here to stay.”