Is There a Place for ChatGPT in Academia?
Scientific writing is an essential skill for researchers, and it is at the doctoral level that researchers begin to hone that skill in earnest.
Last October, Distinguished Professor James Wang contributed to a story in the journal Patterns titled “How Our Authors Are Using AI Tools in Manuscript Writing.” He was asked about the ethical use of generative AI tools—large language models (LLMs) like ChatGPT—during manuscript writing and what benefits and risks such tools might bring to the process. As an experienced research writer, Wang has found that this technology can make the process more efficient. But he stressed that it should be used as a supplement to, not a replacement for, critical thinking and creative writing.
The topic generated lively conversation among IST faculty. Some believe that LLMs have a place in research writing, while others worry that they may prevent students from developing necessary skills.
But even when the use of GenAI is permitted by the instructor, it’s important that guidelines be in place.
“Turning in AI-generated content as one’s own without explicitly acknowledging the use of GenAI undermines academic integrity—it’s no different than hiring someone to write a paper for you or copying content from someone else’s work,” Honavar said. “In courses that focus on developing essential skills such as critical analysis, creative synthesis, or argumentation, reliance on GenAI defeats the entire purpose of the course.”
Honavar was pleasantly surprised by his undergraduate students’ understanding of this. He asked a first-year gen ed AI class to discuss the pros and cons of using large language models to help with writing, and they proposed reasonable guidelines:
- The ideas must be your own and not a regurgitation of what others have written.
- You need to be able to stand by every detail of what is written.
- You need to be able to cite your sources accurately.
- You need to be able to judge whether a piece of writing makes sense and is well-written, which means you have some idea of the subject matter and what it means to write well.
“The students concluded that while large language models may be good to use for cleaning up what you have written, they should not be relied on to write for you,” Honavar said.
“We need to understand when these technologies can enhance students' work and when it's more important to engage in the process ourselves,” Yadav said. “At the same time, it is incredibly important to update our educational curricula so that we impart our students with the skills they will need to succeed in workplaces where ChatGPT usage has become the norm.”
Honavar agrees.
“Instructors need to provide clear, nuanced policies, ideally tailored to the objectives of specific courses, or even specific assignments,” he said. “And in settings where we want to discourage the use of GenAI tools, we should design assignments and evaluation methods (e.g., oral examination) to reduce the temptation for blind reliance on such tools.”
Opinions and policies about the use of GenAI in academia will continue to evolve, according to Lee.
“We are only witnessing early examples and scenarios of what is possible with GenAI tools,” Lee said. “New creative use cases will emerge beyond what we can imagine now, so I don’t think banning the use of large language models for our students’ writing is the way to go. Students need to learn how to use them responsibly, understanding both the pros and cons.”