ASU Faculty Protest AI Tool That Repackages Their Lectures Without Permission
Arizona State professors discover Atom AI app reuses their lectures without consent, raising concerns over ownership and misuse.

From left to right: Fulton Schools computer science doctoral students Priyanuj Bordoloi, Yash Shah, Ashish Raj Shekhar and Shiven Agarwal pose in front of a project poster with Gupta. The team presented a demo to academic leaders at the ASU Future of Learning Community Fest in February in Tempe, Arizona. Photo courtesy of CoRAL
TL;DR: Arizona State University’s new AI service, Atom, is remixing faculty lectures into custom learning modules for $5 a month, sparking outrage over consent and misuse risks.
ASU launched Atom, a web app that charges $5 per month for unlimited AI‑generated learning modules. The tool asks users a few questions, then assembles readings, quizzes and video clips drawn from the university’s online instructional content. Within five minutes, the AI produces a personalized course that cites a handful of ASU experts.
Several professors were blindsided when they saw their own video lectures, slide decks and assignments repackaged without notification. Literature professor Chris Hanlon described the experience as “Frankensteinian,” noting that the AI reproduced his face and altered a decades‑old Canvas video. He said, “I was really taken aback to see my face looking back at me a few moments later.”
Religious studies professor Michael Ostling warned that the same technology could be weaponized. He argued that a malicious user could request a module on a contentious topic, receive a curated set of clips, and present them as fabricated evidence against a faculty member. Ostling added, “It would just be amazingly easy… to get a whole bunch of material from classes on race and gender and turn it into ‘evidence.’”
The university’s intellectual‑property policy places ownership of most instructional material with the Board of Regents, but it does not clearly restrict how AI systems may reuse that content. By uploading files to Canvas, the university’s learning‑management system, professors implicitly allow the platform to “operate and provide its services” — a clause whose interpretation remains vague.
Atom’s early rollout has already produced errors. In Hanlon’s test module, the AI mislabeled literary critic Cleanth Brooks as “Client” Brooks and paired the clip with AI‑generated context that failed to explain its relevance. Ostling fears such inaccuracies could mislead students and expose faculty to reputational damage.
The faculty backlash highlights a broader dilemma: universities are eager to experiment with AI, yet lack transparent consent mechanisms for the data that powers these tools. ASU president Michael Crow has described the project as an “experiment” still in its early stages; professors, meanwhile, are demanding clearer guidelines on ownership, consent and safeguards against abuse.
What to watch next: ASU’s response to faculty pressure, potential policy revisions on AI use, and whether other institutions will adopt similar tools amid growing concerns over academic consent and data security.