The use of artificial intelligence has become increasingly common in academic settings. As professors and institutions grapple with how to approach AI, Wellesley’s departments have adopted varying degrees of standardized rules on academic integrity and the use of AI tools.
Should institutions ban the use of artificial intelligence outright, or learn to use it to their advantage? Proponents of AI bans cite the technology’s environmental and ethical costs, while advocates of regulated use argue that AI’s development is inevitable and that learning to work with it may be more beneficial.
While Wellesley College provides resources related to AI use, there is no readily available college-wide policy on the use of AI for academic work. The Wellesley College Library Research Guide explains how to cite information, text, ideas and data that have been generated or manipulated by artificial intelligence; it also links to various publishers’ statements on their AI policies and describes how copyright applies to generative AI. As a result, policies often vary from department to department, or even from class to class.
The Wellesley computer science department offers multiple courses related to AI, including “CS/MAS 110: Computing in the Age of AI,” which examines how AI works and its impact on society, and “CS 232: Artificial Intelligence,” which investigates the history of artificial intelligence and how it should be treated and applied today. However, none of these courses is listed as a requirement for the computer science major or minor.
In “NEUR 125Y: Brains, Minds, and Machines,” a first-year seminar taught by Associate Professor of Neuroscience Michael Wiest, students may use freely available artificial intelligence platforms to generate ideas and hints and to clarify concepts. However, AI output cannot be submitted as one’s own work, and paid platforms are prohibited for equity reasons, since not all students can afford them. “NEUR 200: Neurons, Networks and Behavior” and “NEUR 335: Computational Neuroscience,” laboratory classes also taught by Wiest, have similar policies.
Beyond the rule prohibiting paid platforms, the neuroscience department has no department-wide policy.
“[I] felt that to some extent [NEUR 125Y] is about AI, so I wanted [students] to be free to explore what’s out there,” Wiest said, adding that artificial intelligence “will be used more and more as a tool.”
Wiest also emphasized the importance of “getting experience and seeing it’s not always right” when using AI platforms.
Wiest cited processing large amounts of data and predicting how proteins fold as particularly productive uses of artificial intelligence.
In courses taught by Professor Eni Mustafaraj of the computer science department, policies on the use of artificial intelligence vary by class. In “CS 111: Computer Programming and Problem Solving,” the use of generative AI is prohibited in most circumstances. Mustafaraj explains that the code AI programs produce is often too advanced for students in an introductory computer science course. AI can, however, be used for one in-class project in which students observe the challenges of engaging with AI when they are unfamiliar with a programming concept.
This policy is enforced with software that flags programming concepts not used or taught in class, a possible sign of AI-generated code. Mustafaraj notes that last year, “we were able to identify several instances where students had used generative AI to write their code.”
Policies on generative AI also vary across course levels. In “CS 299/PHIL 222: Research Methods for Ethics of Technology,” students may use AI to help build technology such as websites, visualizations, and interactive experiences, but not for written assignments. Mustafaraj explains that while AI can produce research summaries, essays are meant to reflect the writer’s original thoughts and ideas. “Writing is the process of thinking…by outsourcing the process of writing, you are outsourcing the process of thinking,” she says.
Mustafaraj is also a member of a new AI working group, formed on Oct. 15, 2024, after an ad hoc committee was created last year. The working group is charged with providing recommendations on how best to engage with AI regarding pedagogy, research and Wellesley in the world, as well as “to understand what people are thinking with respect to AI use, …survey students [and] engage with various offices on campus,” Mustafaraj explains. Its members include several faculty members, the directors of PLTC, Career Education and LTS, and the Chief Justice, a student position. As of Nov. 1, the group had held one meeting.
Although the honor code may offer some protection against dishonest use of artificial intelligence in education, AI use in schoolwork is notoriously difficult to detect. While courses such as Mustafaraj’s in the CS department can identify AI usage relatively accurately by finding code that has not been taught in class, AI detectors for more open-ended work are inconsistent and often erroneously flag human-written work as AI-generated, as a 2023 study in the International Journal for Educational Integrity found.
This raises the concern that students could be falsely found in violation of the honor code if such tools are relied upon to detect AI-generated work. A college-wide policy on artificial intelligence in academic settings could reduce confusion about whether, and to what extent, AI use is permitted, but it also risks becoming outdated, or even harmful to students and educators, as AI technologies continue to advance.
Contact the editors responsible for this story: Valida Pau, Galeta Sandercock