
MIT Writing Professor Flags AI‑Generated Student Stories After Confessions

MIT writing professor identifies AI‑written fiction after student confession, raising concerns about AI's impact on cognition and writing skills.

Alex Mercer, Senior Tech Correspondent · 3 min read

Source: The Guardian

An MIT fiction professor exposed two AI‑generated student stories during a workshop after a student admitted using AI to avoid criticism; a 2025 MIT Media Lab study links AI‑assisted essays to reduced brain connectivity.

Context

Since 2017, the professor has run fiction workshops for STEM‑focused undergraduates, emphasizing close reading and honest peer feedback. Students, many of whom last wrote fiction in middle school, are expected to critique each other's work in detail, a process that can feel threatening.

Key Facts

During a recent semester‑opening workshop, the professor read the opening paragraphs of two submissions and recognized them as AI‑generated. The discovery prompted a discussion about the pressures that drive students to seek shortcuts. One student confessed she had turned to AI because she feared appearing stupid and attracting criticism for weak writing. The professor's identification of the AI‑generated pieces provided a concrete teaching moment about authenticity and the risks of over‑reliance on technology.

A preliminary 2025 study by the MIT Media Lab found that participants who used ChatGPT—a popular AI text generator—to write essays showed lower neural connectivity than those who wrote without assistance. Lower neural connectivity suggests reduced coordination between brain regions involved in complex thinking and language processing. The study adds empirical weight to concerns that habitual AI use may blunt cognitive skills essential for independent writing.

What It Means

The incident highlights a growing tension in higher education: students balance the desire for flawless prose against the fear of exposing their developmental writing stages. The professor's approach—spotting AI output and confronting it directly—reinforces the workshop's core principle that honest critique, even when uncomfortable, is vital for growth. At the same time, the MIT Media Lab findings suggest that reliance on AI could undermine the very neural pathways that workshops aim to strengthen.

Educators may need to rethink how they address AI in curricula, moving beyond detection to fostering responsible use. As AI tools become more accessible, the next challenge will be measuring whether policy changes can preserve students’ cognitive development while still leveraging technology’s benefits.
