Pope Leo Warns AI Cannot Replace Human Moral Judgment, Urges Protection of Children
Pope Leo says AI lacks moral discernment, warns children are vulnerable to algorithmic manipulation, and calls for safeguards to protect human dignity.

TL;DR
Pope Leo warned that artificial intelligence (AI) cannot replicate moral discernment or genuine human bonds, stressing that children are especially vulnerable to manipulation by algorithms. He called for safeguards that protect personal life and social development in the age of AI.
Context
In June 2025 Pope Leo told government leaders in Rome that personal life outweighs any algorithm and that relationships need spaces beyond what machines can offer. He repeated the message at the AI for Good Summit in July, where his secretary of state delivered a note urging that technology serve the common good. In November he addressed AI developers, medical experts, and a conference on children's dignity, consistently linking ethical AI to human dignity and the protection of minors.
Key Facts
He stated that personal life is more valuable than any algorithm and that social relationships require spaces for development far beyond anything a soulless machine can pre-package. He also said AI can mimic human reasoning and perform tasks quickly but lacks moral discernment and the ability to form genuine relationships. Finally, he noted that children and adolescents are especially susceptible to manipulation by AI algorithms that can sway their decisions and preferences.
What It Means
The pope’s remarks frame AI ethics around human dignity, insisting that technological progress must not erode moral judgment or interpersonal ties. For policymakers, this suggests updating data‑protection laws and creating standards that monitor how young people interact with AI systems. Educators and parents are urged to provide ongoing guidance so that AI supports growth rather than distracts from personal development. The emphasis on moral discernment implies that developers should embed ethical review into design cycles, not treat it as an afterthought.
What to watch next
Watch for forthcoming Vatican-led guidelines on AI for children, potential EU or US legislative proposals on algorithmic transparency for minors, and any pilot programs that pair AI tools with supervised educational settings.