Tech · 2 hrs ago

LAist Publishes AI Use Principles, Bans Full AI‑Generated Stories and Commits to Human Review

LAist sets strict AI guidelines, committing to human review for all AI-assisted work and banning fully AI-generated articles to maintain journalistic integrity and public trust.

Alex Mercer · 3 min read · US

Senior Tech Correspondent

Source: NY Post (original source)

LAist has published a comprehensive set of principles for artificial intelligence use, banning entirely AI-generated stories and mandating human review for all AI-assisted content.

LAist has established clear guidelines for artificial intelligence (AI) use, defining strict boundaries for the emerging technology in its journalistic practices. The public media outlet aims to be transparent with its audience regarding AI's role in the newsroom and where the organization draws ethical lines. This move reflects a growing industry effort to balance technological innovation with fundamental editorial integrity and public trust.

The new principles commit LAist to never publishing stories that are wholly generated by artificial intelligence. This core commitment underscores a dedication to human-driven journalism. Furthermore, journalists at LAist remain fully accountable for any AI-assisted work. Every piece of content that involves AI tools requires thorough human review, adhering to the same rigorous editorial standards applied to all other journalism.

These comprehensive principles, last revised on April 24, 2026, outline LAist's approach across its digital and audio journalism, newsletters, social media presence, and even internal use of generative AI. The document emphasizes that human judgment and ethical considerations must consistently guide the reporting and publishing process, ensuring that AI serves as a tool, not a replacement for human intellect.

The policy treats anything produced by an AI tool as unverified, requiring human journalists to confirm all facts through original reporting and trusted sources before publication. This safeguard directly addresses AI's known potential to fabricate details or reflect biases present in its training data. AI tools primarily serve as internal support for specific tasks, such as generating image alt text for accessibility, aiding research workflows, or limited translation efforts, always under human oversight.

LAist explicitly prohibits AI from replicating the voice or likeness of any journalist or other person, reinforcing the irreplaceable role of human reporting and storytelling. The guidelines also ban the use of AI to create misleading images that could be mistaken for original photography or visual reporting. In addition, the organization prohibits entering sensitive source material, unpublished reporting, or private audience and donor information into public AI tools.

This framework sets a precedent for how public media outlets can integrate AI responsibly, prioritizing human editorial control, verification, and transparency. It aims to maintain public trust in an evolving media landscape by defining clear boundaries for AI's role. The implementation of these principles by LAist and the responses from other news organizations will offer valuable insights into the broader direction of AI integration in journalism.
