Is There a Place for ChatGPT in Academia?

Scientific writing is an essential skill for researchers, and it is at the doctoral level that researchers begin to hone that skill in earnest.

Last October, Distinguished Professor James Wang contributed to a story in the journal Patterns titled “How Our Authors Are Using AI Tools in Manuscript Writing.” He was asked about the ethical use of generative AI tools—large language models (LLMs) like ChatGPT—during manuscript writing and what benefits and risks such tools might bring to the process. As an experienced research writer, Wang has found that this technology can make the process more efficient. But he stressed that it should be used as a supplement to, not a replacement for, critical thinking and creative writing.

The topic generated lively conversation among IST faculty. Some believe that LLMs have a place in research writing, while others worry that the technology may prevent students from developing necessary skills.

James Wang, Distinguished Professor

“ChatGPT should not be used to draft a paper but can be used to improve readability and language. By interacting with the technology during the revision process, students—particularly non-native speakers—can learn about grammar and vocabulary. If we clearly explain to PhD students what's acceptable and what's not, most will understand and do the right thing.”

Amulya Yadav, Associate Professor

“While I recognize that tools like ChatGPT are here to stay and can be incredibly powerful for accelerating certain aspects of research, I believe it’s essential that graduate students first build a strong foundation in core communication skills, particularly writing and presenting skills. These are (or at least they have been) critical competencies for any researcher, and relying too heavily on generative tools too early can short-circuit that learning process. I’m not opposed to students using AI tools to support their work, but I think we have a responsibility to teach them—and to learn for ourselves—how to use these technologies thoughtfully and responsibly.”

Dana Calacci, Assistant Professor

“I am not yet convinced that using generative AI will mean that students will ‘never learn’ how to write papers well. I use GenAI at most stages of my writing but ask grad students who are writing papers to work differently to help them develop their own eye and style. I also encourage students to read actively so they can internalize a sense of what good writing looks like—I'm far more concerned by students not reading deeply than by students using GPT to edit, shorten, or make early drafts of texts.”

Dongwon Lee, Professor

“Our recommendation for graduate students is to employ LLMs in a limited manner, akin to using Grammarly or a spellchecker as writing-assistance tools. The students themselves should provide the rationale for the proposed research methods and expected outcomes and demonstrate interdisciplinary research thinking by synthesizing literature, concepts, and methods.”

Of course, the temptation to use AI tools like ChatGPT for writing does not apply only to doctoral students. And while some faculty may worry that undergraduates might not recognize the importance of learning writing skills, others are not so pessimistic.

Vasant Honavar, Professor

“Whether the use of GenAI tools is appropriate depends very much on the context, the intent, and the educational goals. In some settings, it may be perfectly reasonable to use GenAI for brainstorming. And the use of GenAI may be unavoidable in courses that teach how to use GenAI tools.”

But even when the use of GenAI is permitted by the instructor, it’s important that guidelines be in place.

“Turning in AI-generated content as one’s own without explicitly acknowledging the use of GenAI undermines academic integrity—it’s no different than hiring someone to write a paper for you or copying content from someone else’s work,” Honavar said. “In courses that focus on developing essential skills such as critical analysis, creative synthesis, or argumentation, reliance on GenAI defeats the entire purpose of the course.”

Honavar was pleasantly surprised by his undergraduate students’ understanding of this. He asked a first-year gen ed AI class to discuss the pros and cons of using large language models to help with writing, and they proposed reasonable guidelines:

  • The ideas must be your own and not a regurgitation of what others have written.
  • You need to be able to stand by every detail of what is written.
  • You need to be able to cite your sources accurately.
  • You need to be able to judge whether a piece of writing makes sense and is well-written, which means you have some idea of the subject matter and what it means to write well.

"I’m not opposed to students using AI tools to support their work, but I think we have a responsibility to teach them—and to learn for ourselves—how to use these technologies thoughtfully and responsibly.”

Amulya Yadav,, Associate Professor

“The students concluded that while large language models may be good to use for cleaning up what you have written, they should not be relied on to write for you,” Honavar said.

“We need to understand when these technologies can enhance students' work and when it's more important to engage in the process ourselves,” Yadav said. “At the same time, it is incredibly important to update our educational curricula so that we impart to our students the skills they will need to succeed in workplaces where ChatGPT usage has become the norm.”

Honavar agrees.

“Instructors need to provide clear, nuanced policies, ideally tailored to the objectives of specific courses, or even specific assignments,” he said. “And in settings where we want to discourage the use of GenAI tools, we should design assignments and evaluation methods (e.g., oral examination) to reduce the temptation for blind reliance on such tools.”

Opinions and policies about the use of GenAI in academia will continue to evolve, according to Lee.

“We are only witnessing early examples and scenarios of what is possible with GenAI tools,” Lee said. “New creative use cases will emerge beyond what we can imagine now, so I don’t think banning the use of large language models for our students’ writing is the way to go. Students need to learn how to use them responsibly, understanding both the pros and cons.”
