Tech · 2 hrs ago

AI Hiring Tools Favor Resumes Written by Same Model, Study Shows

Research shows AI hiring systems favor resumes written with the same LLM they use, boosting selection odds by 23% to 60%, amid more than 300,000 tech layoffs in early 2026.

Alex Mercer · 3 min read · US

Senior Tech Correspondent


Source: NY Post (original source)

**TL;DR:** Research shows AI hiring systems prefer resumes generated by the same large language model they use, boosting selection chances by 23% to 60%. The bias appears alongside more than 300,000 tech job cuts announced in early 2026.

**Context:** Many companies rely on applicant tracking systems (ATS) to filter resumes before a human recruiter looks. These systems now often embed large language models (LLMs) that assess wording, skills, and fit. When the same LLM that helped a candidate write a resume also evaluates it, a feedback loop can form. Researchers call this tendency self‑preference, noting it advantages outputs that mirror the model’s own style. Because LLMs learn from massive datasets, they often reproduce prevalent phrasing patterns, making self‑generated text statistically more familiar to the same model.

**Key Facts:** In the study, LLMs acted as evaluators and were given both human‑written resumes and AI‑generated versions of equal quality. The models consistently chose their own output over the human version. Across 24 occupations, the likelihood of selecting a resume crafted with the same LLM was 23% to 60% higher than for the alternatives. The effect was strongest in accounting, sales, and finance roles. Separately, Challenger, Gray & Christmas tracked layoff announcements from January through April 2026, recording more than 300,000 job cuts, with the technology sector accounting for the majority of those reductions.
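The pairwise setup described above can be sketched in a few lines of Python. Everything here is illustrative: the `judge` callable stands in for an LLM evaluator, the toy data is invented, and the randomized A/B ordering is a standard control for position bias rather than a detail reported from the study.

```python
import random

def preference_rate(pairs, judge, trials=1000, seed=0):
    """Estimate how often `judge` picks the AI-written resume.

    pairs: list of (ai_resume, human_resume) tuples of comparable quality.
    judge: callable (resume_a, resume_b) -> 0 or 1, the index it selects.
    Presentation order is randomized each trial to cancel position bias.
    Returns (selection rate, relative lift over the 50% no-bias baseline).
    """
    rng = random.Random(seed)
    picks_ai = 0
    for _ in range(trials):
        ai, human = rng.choice(pairs)
        # Randomly assign which side the AI-written resume appears on.
        if rng.random() < 0.5:
            a, b, ai_idx = ai, human, 0
        else:
            a, b, ai_idx = human, ai, 1
        if judge(a, b) == ai_idx:
            picks_ai += 1
    rate = picks_ai / trials
    lift = (rate - 0.5) / 0.5  # e.g. 0.23-0.60 would match the reported range
    return rate, lift
```

An unbiased judge would land near a 50% rate (zero lift); a judge that favors its own stylistic markers pushes the rate, and hence the lift, upward.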

**What It Means:** The self‑preference bias can push qualified applicants aside if they use a different AI tool or write their resumes without assistance. Employers relying on these systems may unintentionally shrink their candidate pools, overlooking talent that does not match the model’s linguistic patterns. Job seekers may feel pressure to adopt the specific LLM favored by target companies, adding a new strategic layer to applications. Regulators and industry groups have begun to scrutinize algorithmic bias in hiring, but concrete guidelines remain scarce. Legal experts warn that unchecked bias could expose firms to discrimination claims under existing employment laws. Key developments to watch: how firms adjust their ATS weighting, whether they audit model outputs for fairness, and whether policymakers introduce transparency rules.
