
Faculty across the United States are sounding an alarm about how quickly generative AI has seeped into student work. A new wave of research finds that 95% of professors believe these tools are pushing learners toward unhealthy dependence on automation and away from the kind of slow, difficult thinking that higher education is supposed to cultivate. The concern is not simply that students are using new technology, but that they are outsourcing the very skills colleges exist to teach.
At the same time, universities are under pressure to integrate AI into classrooms, and major tech companies are racing to sell education-focused tools. I see a widening gap between the speed of adoption and the pace of reflection, with faculty trying to protect critical thinking while the AI ecosystem accelerates around them.
The survey that crystallized faculty fears
The clearest snapshot of this anxiety comes from a recent national survey that asked professors how generative AI is reshaping student learning. In that research, 95% of respondents said the technology will increase student overreliance on artificial tools, a figure that turns a vague worry into a near consensus. The project, whose title flags faculty fears of student overreliance and diminished critical thinking among learners who use generative AI, was framed around a simple question: what happens to reasoning skills when a chatbot can draft your lab report or history essay in seconds?
The survey, conducted as a broad faculty research effort, did more than tally unease. It documented specific fears that generative AI is eroding students’ ability to analyze sources, construct arguments, and persist through complex problems without automated shortcuts. The phrase “diminished critical thinking” is not a rhetorical flourish in this context; it is the core diagnosis from faculty who see assignments that look polished on the surface but reveal shallow understanding when students are pressed to explain their own work aloud.
How AI is changing student habits in the classroom
Behind the headline number is a deeper shift in how students approach everyday tasks, from reading assignments to problem sets. Many professors describe learners who now start with a prompt to a chatbot instead of a blank page or a primary text, effectively turning generative AI into the first and last stop for research. In the same national survey, faculty reported that this pattern is not limited to writing, but extends to brainstorming, coding, and even planning study schedules, all of which can weaken the habit of wrestling with material directly.
The Elon and AAC&U collaboration that produced these findings relied on a broad, non-scientific survey of college and university instructors across disciplines, capturing how these tools are affecting student performance in everything from introductory writing to advanced STEM courses. When 95% of the faculty in this survey say generative AI is pushing students toward dependence, they are describing a pattern they see in office hours and grading sessions, where learners struggle to reproduce or adapt work that appears fluent on the page but was heavily machine-generated.
From critical thinking to “dangerous dependence”
Faculty concerns are not abstract philosophical debates about the nature of intelligence; they are grounded in concrete classroom harms. Instructors report that students who lean heavily on generative AI often skip foundational steps like outlining arguments, annotating readings, or checking sources, because the tool appears to handle those tasks for them. Over time, that habit can hollow out the mental muscles that critical thinking requires, leaving students less able to evaluate claims, spot bias, or transfer knowledge to new problems without automated help.
One national report on faculty attitudes notes that professors see this as part of a “perilous, automated future” for learners who never fully develop independent judgment. When I talk to instructors, they describe students who can generate a plausible answer with a chatbot but freeze when asked to explain their reasoning on a whiteboard or in a seminar. That gap between polished output and fragile understanding is what many now label dangerous dependence, a reliance on systems that can fabricate sources or gloss over nuance without the user noticing.
Cheating, shortcuts and the ethics problem
Alongside worries about weakened cognition, professors are blunt about the way generative AI is reshaping academic integrity. Many believe these tools make it easier to submit work that looks original but is not, from AI-written essays to code that was never actually debugged by the student whose name is on the file. One prominent survey found faculty describing AI’s impact as significant but not positive, with instructors warning that cheating, plagiarism, and uneven learning outcomes will all be shaped by the spread of these tools.
Another national snapshot of faculty opinion found that an overwhelming 95% of instructors are concerned students will over-rely on generative AI as the default way to complete assignments. Those same instructors argue that colleges must “stress the ethical” dimensions of AI use, not just its productivity benefits, and call for explicit guidance on when tools like ChatGPT or image generators are appropriate. I see a growing recognition that without clear norms, students will treat AI as a black-box assistant that quietly handles the hard parts, blurring the line between legitimate support and outright substitution of their own work.
Tech giants push AI into education while faculty tap the brakes
Even as professors voice these concerns, the technology industry is rapidly embedding AI into the infrastructure of schooling. Google and Microsoft are rolling out new classroom features that promise to automate lesson planning, personalize practice exercises, and generate feedback for students at scale. At the Bett UK conference, Google and Microsoft highlighted their biggest education-focused AI push so far, positioning these tools as essential upgrades for modern classrooms.
Yet even in these rollouts, teachers are described as cautious about AI and worried that it could harm learning if not used carefully. That tension mirrors what I hear from faculty who feel caught between institutional pressure to innovate and their own sense that students are already too quick to offload thinking to machines. The same Elon and AAC&U findings that flagged the 95% concern about overreliance also noted that faculty do not feel fully prepared to manage this influx of tools. They are being asked to integrate AI into syllabi, assessment, and advising while still trying to understand how it is reshaping the basic habits of mind that a college degree is supposed to represent.