
Learning to solve problems: the potential of ChatGPT in the classroom

Lipscomb professors weigh in on use of artificial intelligence as it relates to pedagogy and academic integrity

Keely Hagan | 615-966-6491


The use of artificial intelligence (AI) in writing has been spreading like wildfire and sparking debate across college campuses since ChatGPT, the large language model (LLM)-driven chatbot developed by OpenAI, was launched in November 2022.

ChatGPT is used to rapidly answer questions and generate written content, such as essays, poems, emails, scripts and term papers, and has made headlines for its ability to pass exams at top law, business and medical schools across the nation.


At Lipscomb University, both the Center for Teaching & Learning (CTL) and the Academic Integrity Council are organizing conversations around the topic of artificial intelligence as it relates to both pedagogy and academic integrity.

In a faculty meeting early in the spring semester, Julia Osteen, director of professional development of the CTL, and Laura Morrow, director of strategic initiatives and collaborative partnerships of the CTL, discussed ChatGPT. “Our focus in the CTL is to assist faculty to use ChatGPT for good and not evil, recognizing that with power (such as new technology) comes great responsibility,” they quipped.

As a helpful tool, ChatGPT can effectively create scenarios, case studies or rubrics. Its output can also serve as an exercise for students to analyze and assess. One of its most valuable uses is brainstorming a starting point for assignments. Yet Osteen and Morrow acknowledge that it may also be used for cheating.


“Since the dawn of time, students have tried to find shortcuts to minimize effort in succeeding in classes, from writing on the palms of their hands to hiding cheat sheets and paying for papers on the internet,” they said. “When there’s desperation, they will find a way. With this said, it’s important to recognize that we do not believe artificial intelligence should drive decisions instructors make in the classroom.”

Some critics have expressed concern that ChatGPT will lead to a generation of college degree-earning people who don’t know how to write or think for themselves. Osteen and Morrow offer college faculty a hopeful perspective to address that fear: some of the best ways to encourage students to do their own work are simply pedagogical best practices. Assignments that focus on the process of learning, encourage high-level thinking and develop practical wisdom, drawing on peer review, personal reflections and connections to class-specific content, target exactly the areas where AI is currently limited.


“I think of it as Wikipedia and Google on steroids,” says Joe Ivey, professor of management, College of Business. “They are compilation tools and that’s essentially the way I instruct folks to use ChatGPT in class. It’s a great way to get started, to point you in a direction, but in my classes I will not accept it as a citation. Students need to dig in and find the primary sources.”

Ivey points out that while ChatGPT can help generate ideas, create an outline and find sources, it has significant limitations. Users must know how to craft the right question to get a useful, detailed response, and they must understand the subject matter well enough to detect errors in the AI-generated answers. Critics have noted that ChatGPT cannot fact-check its own output and has no underlying sense of factuality.

“For example,” explains Ivey, “when I asked it what the U.S. corporate tax rate was, it gave me the wrong answer. If students are going to use it, they must at least be a little bit aware of the correct answer.”
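For illustration, a question like Ivey’s could be posed to the model programmatically. The snippet below is a minimal sketch assuming OpenAI’s official Python SDK and an OPENAI_API_KEY environment variable; the model name and prompt are illustrative, and, as Ivey warns, the answer still has to be verified against a primary source such as IRS publications.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "What is the current U.S. federal corporate tax rate? Name your source.",
    }],
)

# The reply is plain text; it may be wrong or outdated, so check it
# against an authoritative source before relying on it.
print(response.choices[0].message.content)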

Asked about the reliability of its answers, ChatGPT itself stated that it attempts to verify the accuracy of information by cross-referencing multiple sources whenever possible, but that it is not infallible and can make mistakes. It concluded, “Therefore, it’s always a good practice to fact-check the information I provide by consulting other reliable sources.”

Under the hood, ChatGPT generates text by repeatedly predicting the most likely next word (or fragment of a word) given everything that came before, appending it, and predicting again until a complete response takes shape. Many educators argue that this process alone cannot produce rigorously researched, coherent college-level essays.
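That word-by-word loop can be sketched in a few lines of Python. The example below is an illustration only: it uses the small, openly available GPT-2 model via the Hugging Face transformers library as a stand-in, since ChatGPT’s own model is not publicly downloadable.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The university library is"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                    # add 20 tokens, one at a time
        logits = model(ids).logits         # a score for every vocabulary item
        next_id = logits[0, -1].argmax()   # greedily take the most likely token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))

Each pass through the loop consults only the statistics of what text tends to follow other text, which is why fluency comes easily to these models while factual grounding does not.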

Besides its deficiencies with citable primary sources and accurate information, critics worry about its inability to make value judgments.

“I was playing around with it a bit and typed in the title of a paper I was writing called ‘Is American Capitalism Wise?’ Its response was ‘here are what some people say are wise and here are what some people say are unwise, but it all depends on context.’ If you factor that out, you’re going to have something, but it’s up to you to sort it out and come up with a value judgment,” says Ivey.

“I am not particularly worried about a student submitting a GPT-generated paper because the answers are not terribly good and the text is very generic,” Ivey adds. “The prompts I provide are very specific and require current data. I’ve run them through GPT and basically get back, ‘I can’t answer that.’”

Despite its limitations, some 40 million users go to ChatGPT on a daily basis. As the use of ChatGPT booms, so does the market for tools that try to determine whether a given text was written by a human or by AI. Combating the use of ChatGPT with detection services creates another ethical dilemma for faculty, says J. Caleb Clanton, who holds the Distinguished University Chair in Philosophy and Humanities in the College of Liberal Arts & Sciences.
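Many such detectors rest on a simple statistical hunch: text that a language model finds highly predictable is more likely to have been machine-generated. The sketch below scores a passage by its perplexity under the open GPT-2 model; it illustrates the general idea only, not how any commercial detection service actually works.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    """Average per-token 'surprise' of the text; lower means more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # For causal LMs, passing labels=ids returns the mean cross-entropy loss.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# A low score *may* hint at AI-generated text, but no threshold is reliable
# enough to treat as proof of cheating.
print(perplexity("Artificial intelligence is transforming higher education."))

Scores like this are known to misfire on genuine student writing, and that risk of false accusation is part of the ethical worry.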

“I don't think we should react by engaging in a technological arms race,” Clanton reflects. “We are going to get into an expensive game where we chase one technological problem with a new technological fix.”

Clanton addressed an earlier version of this firestorm over academic dishonesty in his 2009 paper “A Moral Case Against Certain Uses of Plagiarism Detection Services.” At that time, students were cheating with Google. While he acknowledges that ChatGPT puts new technological resources into play for students who want to cheat, he says the underlying moral issue is basically the same and believes his earlier argument still holds.

“I have moral reservations about plagiarism detection services,” explains Clanton. “We ought to try not to treat our students as presumed guilty and in need of policing or a set of sanctions that I enforce upon them as a professor. I think that’s the wrong relationship between professor and student.

“The point of the university is to ultimately shape people in ways that can allow them to flourish. And that’s about virtue formation, and always has been. I tell my students, ‘If you want to cheat your way through college, you can do that. You might be able to beat me, in terms of my ability to know what you did or didn’t write.’” He continues, “‘If you do that, you need to realize that you’re a worse person than you think you are. This is an opportunity for you to exercise the proper kind of moral judgment. If you will do it here, you’ll do it everywhere for the rest of your life. And so go ahead, make your decision. You ultimately have to be the one choosing to do what is right.’”

While the effect of AI tools like ChatGPT on learning and ethics is hotly debated, there is little doubt students will continue to use them. Technology experts predict ChatGPT will soon become a common daily-use tool, like the calculator before it, and that it will in turn be followed by another innovation that reignites debates around academic integrity.