Beyond Cheating: How Rutgers Students Are Using AI — and Navigating Different Rules 


Artificial intelligence tools such as ChatGPT and Grammarly are becoming a regular part of how students at Rutgers University study, but faculty responses to their use remain sharply divided. 

While some students say AI helps them better understand course material, some professors argue the technology undermines learning and contributes to academic dishonesty. The contrast reflects an ongoing debate across higher education over how, or whether, AI should be integrated into the classroom. 

Michael, a current Labor Relations student at Rutgers, said he uses AI tools primarily as a study aid rather than a substitute for completing assignments. 

“I do use AI tools like ChatGPT and Grammarly for studying,” Michael said. “I mainly use them to help me understand difficult concepts, break down readings into simpler explanations, generate study questions, or organize my thoughts before I start writing. I treat it more like a study partner or tutor rather than something that writes my assignments for me.” 

According to Michael, AI’s usefulness lies in its ability to provide immediate clarification when students encounter dense readings or unfamiliar concepts. He said he is careful to avoid using the tools in ways that would violate course rules. 

“I avoid using AI to write full essays or complete assignments that are meant to reflect my own analysis,” he said. “I also don’t use it during exams or for anything that would clearly violate academic integrity policies. I try to use it as a learning aid, not as a shortcut.” 

However, faculty members who oppose AI use say the distinction between “study aid” and “shortcut” is often blurred in practice. 

Dr. Tom Cioppa, a Rutgers professor in the Social Sciences, said he has serious concerns about AI tools in academic settings, particularly regarding cheating and student engagement. 

“AI and cheating go hand-in-hand with students,” Cioppa said. “I have yet to see any valuable application for AI in my classrooms, but I have seen a very large amount of cheating since students first became aware of AI models such as ChatGPT two years ago.” 

Cioppa said his concerns extend beyond academic integrity to how AI affects students’ learning habits.

“AI makes finding information so easy that it discourages students from actually having to do any heavy lifting to find the information they are looking for,” he said. “AI rewards laziness rather than personal inquiry and initiative.” 

He added that AI-generated summaries may encourage surface-level engagement with material rather than close reading and critical analysis. 

“Perhaps most insidiously, it provides ‘executive summaries’ rather than delving into deep details,” Cioppa said. “The last thing students need in this day and age is encouragement to read less, which, unfortunately, they are already doing in ever-growing amounts.” 

Because of these concerns, Cioppa said he does not permit the use of AI in his courses. 

“I do not use AI in my courses, which are in the social sciences,” he said. “I am sure in certain academic disciplines it makes total sense to have students learn and use AI. But social scientists study people, not 1s and 0s.” 

Rutgers currently allows individual instructors to set their own policies regarding AI use. University guidance published by Rutgers Information Technology advises students to follow course-specific rules and notes that unauthorized AI use may constitute a violation of academic integrity policies. 

This decentralized approach means expectations can vary widely across classes. Michael said navigating these differences requires attentiveness and communication. 

“Yes, different professors definitely have different rules,” he said. “Some allow AI for brainstorming and outlining, while others restrict it heavily. I always check the syllabus and, if I’m unsure, I ask. I navigate it by staying transparent and making sure I’m following whatever guidelines the professor sets for that course.” 

The lack of uniform standards reflects broader uncertainty in higher education as institutions adapt to rapidly evolving technology. Supporters of limited AI integration argue that students will encounter similar tools in professional environments, while critics worry that reliance on AI may weaken foundational skills. 

For now, the debate continues largely at the classroom level. As students experiment with new tools and professors enforce varying policies, AI’s role at Rutgers remains unsettled. 

What is clear, however, is that artificial intelligence is no longer a hypothetical issue on campus. Whether viewed as a helpful study companion or a threat to learning, AI is reshaping how students study, and how professors define academic work.
