UC San Diego Alumna Warns AI Risks Human Cognitive Skills, Highlights 10% 'Cyborg' Users Who Outperform AI Alone
Vivienne Ming, a UCSD alumna, warns AI can erode human cognitive skills. Her research shows "cyborg" users who actively challenge AI outperform passive users.

[Image: A woman lies on a medical bed wearing a cap fitted with electrodes and wires; in the background, a healthcare professional in a white coat examines brain scans on a monitor.]
TL;DR
A UC San Diego alumna and machine learning expert warns that AI development risks eroding critical human cognitive skills, emphasizing that active engagement with AI is crucial for human users to outperform AI alone.
Vivienne Ming, a UC San Diego alumna and theoretical neuroscientist, highlights a critical challenge posed by artificial intelligence. Her research, detailed in her new book “Robot-Proof,” argues that the primary risk of AI extends beyond job displacement. Instead, she identifies a more fundamental threat: the potential for passive AI use to diminish essential human cognitive abilities. Ming contends that the machine learning industry frequently focuses on making AI smarter without equally considering the human element in this evolving interaction.
Ming’s work points to a measurable decline in cognitive engagement when individuals use AI passively. Her experiments, which included UC Berkeley students, illustrate AI’s analytical power. In one test, even the smallest open-source AI model consistently outperformed the best human participants in making complex predictions, underscoring AI's superior capability for certain tasks. However, a distinct cohort emerged from these trials: roughly 10% of participants took an active, questioning approach to AI, earning them the label "cyborgs." These individuals did not merely use AI; they challenged it, and in doing so significantly outperformed AI systems operating in isolation.
This research suggests that effective human-AI collaboration hinges on active engagement. Passive acceptance of AI outputs risks cognitive erosion, as AI systems can automate entire thought processes, potentially disengaging human users. Ming clearly states, "We should be careful that what we're building doesn't automate away the very capacities that make us human." The "cyborg" strategy—where humans actively interrogate and build upon AI's insights—offers a model for augmenting human intelligence rather than diminishing it. This approach prioritizes developing human strengths that AI cannot replicate. As AI tools integrate further into daily life, tracking strategies that promote active human cognitive involvement will be essential.