Half a decade ago, few colleges had formal policies on artificial intelligence (AI). Today, nearly every syllabus at St. Olaf has a section dedicated to it.
Since the release of GPT-4 and its subsequent iterations, large language models have become both a partner and a threat to educators. On one hand, generative AI (GenAI) is seen as efficient, a relief from the remedial tasks of everyday life. On the other, it raises numerous ethical questions, from plagiarism to the environmental impact of the massive data centers that train and run AI models.
This conversation has landed squarely on the Hill, prompting faculty at St. Olaf to fundamentally reconsider how to uphold the values of a liberal arts education alongside new terms like “AI-readiness.”
According to Provost and Vice President for Academic Affairs Tarshia Stanley, this is nothing new.
AI has been present in the curriculum since its inception. Whether in the data of a computer science course or the dialogue of an environmental studies course, AI has long been part of the academic conversation.
“It’s not like we have to stop what we’re doing and go bring AI in. It’s already in,” Stanley said in an interview with The Olaf Messenger.
Since stepping into her role as provost this summer, Stanley has become involved in much of the conversation around AI at St. Olaf — even if it didn’t begin with her.
Establishing guidelines
Last spring, former Provost Marci Sortor convened a team of faculty and staff tasked with creating guidelines for how professors engage with AI in their course curricula.
By the end of the summer, the AI working group had produced a comprehensive set of materials, including an ethics statement, example syllabus statements for varying levels of AI use, statements on critical study and misuse, and a list of resources for faculty members.
These guidelines, which were shaped by St. Olaf’s mission statement, call for critical thinkers “who inquire about the world discerningly and with curiosity” and “who steward our shared social, environmental, and technological realities with purpose and care.”
Associate Professor of French Sean Killackey was part of the AI summer working group. In his conversations with faculty members during the fact-finding phase of the guidelines, he encountered a variety of opinions on AI.
“Some of them thought this is going to be the end of thinking, that we’re going to end up with a generation coming out of college that has never read a book, that has never wrestled with ‘how do I respond to that particular type of thought by that philosopher?’” Killackey said.
To Killackey, however, there is a middle ground, one that acknowledges both the risks and the benefits. Reaching it requires effort on the professor’s part.
“I think we do have a responsibility as faculty members, [to] dig into this a little bit and learn about it so that we are able to help our students address this really disruptive force in our society,” Killackey said.
Without that guidance, he added, students risk entering a workforce that increasingly expects familiarity with AI.
“If the answer is, ‘No, my professors completely forbid it in any way. I don’t know anything about it,’ I think that’s going to disadvantage our students in the future,” Killackey said.
Since the completion of the summer group’s work, Stanley has reconstituted the committee, which is now focusing on the next steps in the conversation, such as the use of contained tools like CoPilot and what AI knowledge will look like across the different disciplines.
Faculty approach
Amid the broader, campus-wide conversation, faculty have begun to consider what role, if any, AI should play in their individual classrooms.
Out of 30 professors across 15 disciplines polled by The Olaf Messenger, 73% included a formal written AI policy in their syllabus. Nearly 37% described themselves as neutral toward AI and student learning, while 33% viewed it negatively — a split that reflects both uncertainty and skepticism across campus. Only 13% viewed AI as having a positive impact on student learning.
“Faculty are challenged to find an ethical approach to developing learning environments,” Associate Professor of Computer Science Olaf Hall-Holt said in an interview with The Olaf Messenger.
Hall-Holt has been exploring AI and its usage within computer science and statistics.
“The answer, as it were, to some of the challenges in trying to have a level playing field for everyone to make use of AI,” Hall-Holt said, “is to actually customize the AI in certain ways.”
During his sabbatical last year, he co-wrote a paper titled “Data Science in Statistics Curricula: Preparing Students to ‘Think with Data.’” The paper emphasizes an approach that prioritizes the process of thinking rather than simply arriving at a final result. Like fire, Hall-Holt says, AI can be shaped to bolster students’ learning rather than hurt it.
Faculty perspectives on AI, however, are far from uniform. Political Science Professor Douglas Casson, for example, chose not to include an AI statement in his syllabi.
“You can’t make laws unless you can enforce them,” Casson said.
Instead of imposing a rule, Casson opened his course policy up for discussion, asking his students what they thought about AI. Interestingly enough, his class opted not to allow it.
To Casson, this highlights what students are truly at college for: to learn.
“If you ask students why they are in college, and what they want out of their education, they are going to be honest, and they’re going to say things like, ‘I don’t want to just sit in front of ChatGPT and produce things,’” Casson said. “That is soul draining, right?”
The liberal arts and AI
St. Olaf’s mission statement calls for challenging students to “excel in the liberal arts” by “examining faith and values” and “exploring meaningful vocation in an inclusive, globally engaged community.”
The liberal arts, then, some argue, are the perfect place for something like AI to flourish. Proponents such as Stanley see liberal arts students as sorely needed in the workforce.
“The liberal arts education is breadth and depth,” Stanley said. “They [companies] need people who can think and people who can understand, from a broad and a deep perspective, what it is to make a decision.”
However, before St. Olaf fully embraces the technology, some faculty argue that the college must critically reflect on the ethical questions that engaging with AI poses, including intellectual property theft, its negative effect on the environment, and the potential for overreliance on machine output. These issues, for some, run counter to what the liberal arts stand for.
“I think a liberal arts education really involves the development of human judgment,” Casson said. “Judgment is this complicated capacity that involves learning about your environment, using your creativity and maybe even your courage to think things through for yourself, and then arriving at conclusions that you take responsibility for.”
These questions, Stanley acknowledges, must be answered.
“It’s a question for our society,” said Stanley. “Our faculty, of course, are going to help lead our students in that, but it’s a question that won’t stop in the classroom. It’s a question that you’ll have to be thinking about after you graduate for the rest of your lives.”